2026-03-17 00:00:06.748571 | Job console starting
2026-03-17 00:00:06.789662 | Updating git repos
2026-03-17 00:00:07.204041 | Cloning repos into workspace
2026-03-17 00:00:07.515267 | Restoring repo states
2026-03-17 00:00:07.539029 | Merging changes
2026-03-17 00:00:07.539055 | Checking out repos
2026-03-17 00:00:07.932379 | Preparing playbooks
2026-03-17 00:00:09.159343 | Running Ansible setup
2026-03-17 00:00:17.854474 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-17 00:00:20.422013 |
2026-03-17 00:00:20.422190 | PLAY [Base pre]
2026-03-17 00:00:20.468514 |
2026-03-17 00:00:20.468710 | TASK [Setup log path fact]
2026-03-17 00:00:20.521395 | orchestrator | ok
2026-03-17 00:00:20.581830 |
2026-03-17 00:00:20.582011 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-17 00:00:20.645530 | orchestrator | ok
2026-03-17 00:00:20.667885 |
2026-03-17 00:00:20.668033 | TASK [emit-job-header : Print job information]
2026-03-17 00:00:20.725547 | # Job Information
2026-03-17 00:00:20.725771 | Ansible Version: 2.16.14
2026-03-17 00:00:20.725807 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-17 00:00:20.725840 | Pipeline: periodic-midnight
2026-03-17 00:00:20.725863 | Executor: 521e9411259a
2026-03-17 00:00:20.725884 | Triggered by: https://github.com/osism/testbed
2026-03-17 00:00:20.725906 | Event ID: 7f7c98d488164e9a90f8fe7794c9d4c5
2026-03-17 00:00:20.735021 |
2026-03-17 00:00:20.735159 | LOOP [emit-job-header : Print node information]
2026-03-17 00:00:20.936894 | orchestrator | ok:
2026-03-17 00:00:20.937185 | orchestrator | # Node Information
2026-03-17 00:00:20.937264 | orchestrator | Inventory Hostname: orchestrator
2026-03-17 00:00:20.937291 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-17 00:00:20.937313 | orchestrator | Username: zuul-testbed06
2026-03-17 00:00:20.937333 | orchestrator | Distro: Debian 12.13
2026-03-17 00:00:20.937356 | orchestrator | Provider: static-testbed
2026-03-17 00:00:20.937378 | orchestrator | Region:
2026-03-17 00:00:20.937399 | orchestrator | Label: testbed-orchestrator
2026-03-17 00:00:20.937420 | orchestrator | Product Name: OpenStack Nova
2026-03-17 00:00:20.937439 | orchestrator | Interface IP: 81.163.193.140
2026-03-17 00:00:20.970306 |
2026-03-17 00:00:20.970461 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-17 00:00:22.629691 | orchestrator -> localhost | changed
2026-03-17 00:00:22.652580 |
2026-03-17 00:00:22.654923 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-17 00:00:25.833601 | orchestrator -> localhost | changed
2026-03-17 00:00:25.864725 |
2026-03-17 00:00:25.864827 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-17 00:00:26.706059 | orchestrator -> localhost | ok
2026-03-17 00:00:26.711690 |
2026-03-17 00:00:26.711783 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-17 00:00:26.760049 | orchestrator | ok
2026-03-17 00:00:26.790714 | orchestrator | included: /var/lib/zuul/builds/9d2318408dc845a1bb8697a007f9fb34/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-17 00:00:26.826914 |
2026-03-17 00:00:26.827097 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-17 00:00:31.306110 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-17 00:00:31.308830 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/9d2318408dc845a1bb8697a007f9fb34/work/9d2318408dc845a1bb8697a007f9fb34_id_rsa
2026-03-17 00:00:31.309720 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/9d2318408dc845a1bb8697a007f9fb34/work/9d2318408dc845a1bb8697a007f9fb34_id_rsa.pub
2026-03-17 00:00:31.309760 | orchestrator -> localhost | The key fingerprint is:
2026-03-17 00:00:31.309791 | orchestrator -> localhost | SHA256:ybigLP3Nl3GP0FH1nBGN82NwKzRYGlCxSaejyBhM2FA zuul-build-sshkey
2026-03-17 00:00:31.309816 | orchestrator -> localhost | The key's randomart image is:
2026-03-17 00:00:31.309849 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-17 00:00:31.309872 | orchestrator -> localhost | | .=E .o=++.+o|
2026-03-17 00:00:31.309894 | orchestrator -> localhost | | .o. ..O+o++|
2026-03-17 00:00:31.309915 | orchestrator -> localhost | | o B. +++|
2026-03-17 00:00:31.309935 | orchestrator -> localhost | | +o..o .. +.|
2026-03-17 00:00:31.309955 | orchestrator -> localhost | | ...oSo . o .|
2026-03-17 00:00:31.309978 | orchestrator -> localhost | | o . . .o o |
2026-03-17 00:00:31.309998 | orchestrator -> localhost | |. + . = o |
2026-03-17 00:00:31.310018 | orchestrator -> localhost | | . . o o . . |
2026-03-17 00:00:31.310038 | orchestrator -> localhost | | . o. |
2026-03-17 00:00:31.310058 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-17 00:00:31.310118 | orchestrator -> localhost | ok: Runtime: 0:00:02.753477
2026-03-17 00:00:31.329732 |
2026-03-17 00:00:31.330022 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-17 00:00:31.376095 | orchestrator | ok
2026-03-17 00:00:31.405397 | orchestrator | included: /var/lib/zuul/builds/9d2318408dc845a1bb8697a007f9fb34/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-17 00:00:31.443025 |
2026-03-17 00:00:31.443127 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-17 00:00:31.496331 | orchestrator | skipping: Conditional result was False
2026-03-17 00:00:31.502808 |
2026-03-17 00:00:31.502917 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-17 00:00:32.687316 | orchestrator | changed
2026-03-17 00:00:32.693232 |
2026-03-17 00:00:32.693324 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-17 00:00:32.975808 | orchestrator | ok
2026-03-17 00:00:32.990476 |
2026-03-17 00:00:32.990569 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-17 00:00:33.464293 | orchestrator | ok
2026-03-17 00:00:33.478962 |
2026-03-17 00:00:33.479095 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-17 00:00:34.020194 | orchestrator | ok
2026-03-17 00:00:34.026775 |
2026-03-17 00:00:34.026889 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-17 00:00:34.080084 | orchestrator | skipping: Conditional result was False
2026-03-17 00:00:34.085603 |
2026-03-17 00:00:34.085736 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-17 00:00:35.178470 | orchestrator -> localhost | changed
2026-03-17 00:00:35.209661 |
2026-03-17 00:00:35.209760 | TASK [add-build-sshkey : Add back temp key]
2026-03-17 00:00:36.242961 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/9d2318408dc845a1bb8697a007f9fb34/work/9d2318408dc845a1bb8697a007f9fb34_id_rsa (zuul-build-sshkey)
2026-03-17 00:00:36.243149 | orchestrator -> localhost | ok: Runtime: 0:00:00.032288
2026-03-17 00:00:36.249047 |
2026-03-17 00:00:36.249139 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-17 00:00:36.963148 | orchestrator | ok
2026-03-17 00:00:36.967994 |
2026-03-17 00:00:36.968074 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-17 00:00:37.035909 | orchestrator | skipping: Conditional result was False
2026-03-17 00:00:37.106736 |
2026-03-17 00:00:37.106848 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-17 00:00:37.631733 | orchestrator | ok
2026-03-17 00:00:37.655515 |
2026-03-17 00:00:37.655617 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-17 00:00:37.719007 | orchestrator | ok
2026-03-17 00:00:37.725021 |
2026-03-17 00:00:37.725107 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-17 00:00:38.413264 | orchestrator -> localhost | ok
2026-03-17 00:00:38.420128 |
2026-03-17 00:00:38.420227 | TASK [validate-host : Collect information about the host]
2026-03-17 00:00:39.831908 | orchestrator | ok
2026-03-17 00:00:39.856860 |
2026-03-17 00:00:39.856968 | TASK [validate-host : Sanitize hostname]
2026-03-17 00:00:39.942960 | orchestrator | ok
2026-03-17 00:00:39.947815 |
2026-03-17 00:00:39.947918 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-17 00:00:41.422047 | orchestrator -> localhost | changed
2026-03-17 00:00:41.427066 |
2026-03-17 00:00:41.427149 | TASK [validate-host : Collect information about zuul worker]
2026-03-17 00:00:42.082920 | orchestrator | ok
2026-03-17 00:00:42.087232 |
2026-03-17 00:00:42.087316 | TASK [validate-host : Write out all zuul information for each host]
2026-03-17 00:00:43.455148 | orchestrator -> localhost | changed
2026-03-17 00:00:43.464835 |
2026-03-17 00:00:43.464922 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-17 00:00:43.783473 | orchestrator | ok
2026-03-17 00:00:43.791537 |
2026-03-17 00:00:43.791642 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-17 00:02:14.969983 | orchestrator | changed:
2026-03-17 00:02:14.970267 | orchestrator | .d..t...... src/
2026-03-17 00:02:14.970307 | orchestrator | .d..t...... src/github.com/
2026-03-17 00:02:14.970333 | orchestrator | .d..t...... src/github.com/osism/
2026-03-17 00:02:14.970355 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-17 00:02:14.970377 | orchestrator | RedHat.yml
2026-03-17 00:02:15.000198 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-17 00:02:15.000216 | orchestrator | RedHat.yml
2026-03-17 00:02:15.000269 | orchestrator | = 1.53.0"...
2026-03-17 00:02:26.342301 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-17 00:02:26.358096 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-17 00:02:26.710612 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-17 00:02:27.440738 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-17 00:02:27.496405 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-17 00:02:28.111876 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-17 00:02:28.164230 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-17 00:02:28.622999 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-17 00:02:28.623060 | orchestrator |
2026-03-17 00:02:28.623068 | orchestrator | Providers are signed by their developers.
2026-03-17 00:02:28.623074 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-17 00:02:28.623079 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-17 00:02:28.623085 | orchestrator |
2026-03-17 00:02:28.623090 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-17 00:02:28.623094 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-17 00:02:28.623106 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-17 00:02:28.623110 | orchestrator | you run "tofu init" in the future.
2026-03-17 00:02:28.623336 | orchestrator |
2026-03-17 00:02:28.623348 | orchestrator | OpenTofu has been successfully initialized!
2026-03-17 00:02:28.623353 | orchestrator |
2026-03-17 00:02:28.623357 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-17 00:02:28.623364 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-17 00:02:28.623368 | orchestrator | should now work.
2026-03-17 00:02:28.623372 | orchestrator |
2026-03-17 00:02:28.623389 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-17 00:02:28.623393 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-17 00:02:28.623401 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-17 00:02:29.186220 | orchestrator | Created and switched to workspace "ci"!
2026-03-17 00:02:29.186316 | orchestrator |
2026-03-17 00:02:29.186331 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-17 00:02:29.186343 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-17 00:02:29.186353 | orchestrator | for this configuration.
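The `tofu init` output above resolves three providers. For reference, a `required_providers` block consistent with the versions this run installed might look like the following. This is a hypothetical sketch only: the testbed repository's actual source addresses and version constraints (including the truncated `= 1.53.0"` bound earlier in the log) are not fully visible here and may differ.

```hcl
# Hypothetical sketch: constraints consistent with the providers this run
# installed (openstack v3.4.0, local v2.7.0, null v3.2.4). The repository's
# real constraints may be looser version bounds rather than exact pins.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "3.4.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.7.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "3.2.4"
    }
  }
}
```

Running `tofu init` against a block like this produces the provider installation and `.terraform.lock.hcl` messages shown above; `tofu workspace new ci` then creates and switches to the empty "ci" workspace.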
2026-03-17 00:02:29.352188 | orchestrator | ci.auto.tfvars
2026-03-17 00:02:30.197554 | orchestrator | default_custom.tf
2026-03-17 00:02:32.183465 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-17 00:02:32.715198 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-17 00:02:32.979689 | orchestrator |
2026-03-17 00:02:32.979752 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-17 00:02:32.979760 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-17 00:02:32.979836 | orchestrator | + create
2026-03-17 00:02:32.979854 | orchestrator | <= read (data resources)
2026-03-17 00:02:32.979867 | orchestrator |
2026-03-17 00:02:32.979872 | orchestrator | OpenTofu will perform the following actions:
2026-03-17 00:02:32.984265 | orchestrator |
2026-03-17 00:02:32.984306 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-17 00:02:32.984312 | orchestrator | # (config refers to values not yet known)
2026-03-17 00:02:32.984317 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-17 00:02:32.984322 | orchestrator | + checksum = (known after apply)
2026-03-17 00:02:32.984327 | orchestrator | + created_at = (known after apply)
2026-03-17 00:02:32.984331 | orchestrator | + file = (known after apply)
2026-03-17 00:02:32.984335 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.984358 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.984362 | orchestrator | + min_disk_gb = (known after apply)
2026-03-17 00:02:32.984366 | orchestrator | + min_ram_mb = (known after apply)
2026-03-17 00:02:32.984370 | orchestrator | + most_recent = true
2026-03-17 00:02:32.984375 | orchestrator | + name = (known after apply)
2026-03-17 00:02:32.984379 | orchestrator | + protected = (known after apply)
2026-03-17 00:02:32.984383 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.984389 | orchestrator | + schema = (known after apply)
2026-03-17 00:02:32.984393 | orchestrator | + size_bytes = (known after apply)
2026-03-17 00:02:32.984397 | orchestrator | + tags = (known after apply)
2026-03-17 00:02:32.984401 | orchestrator | + updated_at = (known after apply)
2026-03-17 00:02:32.984405 | orchestrator | }
2026-03-17 00:02:32.984408 | orchestrator |
2026-03-17 00:02:32.984412 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-17 00:02:32.984417 | orchestrator | # (config refers to values not yet known)
2026-03-17 00:02:32.984420 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-17 00:02:32.984424 | orchestrator | + checksum = (known after apply)
2026-03-17 00:02:32.984428 | orchestrator | + created_at = (known after apply)
2026-03-17 00:02:32.984432 | orchestrator | + file = (known after apply)
2026-03-17 00:02:32.984436 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.984439 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.984443 | orchestrator | + min_disk_gb = (known after apply)
2026-03-17 00:02:32.984447 | orchestrator | + min_ram_mb = (known after apply)
2026-03-17 00:02:32.984450 | orchestrator | + most_recent = true
2026-03-17 00:02:32.984454 | orchestrator | + name = (known after apply)
2026-03-17 00:02:32.984458 | orchestrator | + protected = (known after apply)
2026-03-17 00:02:32.984462 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.984465 | orchestrator | + schema = (known after apply)
2026-03-17 00:02:32.984469 | orchestrator | + size_bytes = (known after apply)
2026-03-17 00:02:32.984472 | orchestrator | + tags = (known after apply)
2026-03-17 00:02:32.984476 | orchestrator | + updated_at = (known after apply)
2026-03-17 00:02:32.984480 | orchestrator | }
2026-03-17 00:02:32.984484 | orchestrator |
2026-03-17 00:02:32.984487 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-17 00:02:32.984491 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-17 00:02:32.984495 | orchestrator | + content = (known after apply)
2026-03-17 00:02:32.984499 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-17 00:02:32.984503 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-17 00:02:32.984507 | orchestrator | + content_md5 = (known after apply)
2026-03-17 00:02:32.984510 | orchestrator | + content_sha1 = (known after apply)
2026-03-17 00:02:32.984514 | orchestrator | + content_sha256 = (known after apply)
2026-03-17 00:02:32.984518 | orchestrator | + content_sha512 = (known after apply)
2026-03-17 00:02:32.984521 | orchestrator | + directory_permission = "0777"
2026-03-17 00:02:32.984525 | orchestrator | + file_permission = "0644"
2026-03-17 00:02:32.984529 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-17 00:02:32.984532 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.984536 | orchestrator | }
2026-03-17 00:02:32.984540 | orchestrator |
2026-03-17 00:02:32.984543 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-17 00:02:32.984547 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-17 00:02:32.984551 | orchestrator | + content = (known after apply)
2026-03-17 00:02:32.984554 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-17 00:02:32.984558 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-17 00:02:32.984562 | orchestrator | + content_md5 = (known after apply)
2026-03-17 00:02:32.984565 | orchestrator | + content_sha1 = (known after apply)
2026-03-17 00:02:32.984569 | orchestrator | + content_sha256 = (known after apply)
2026-03-17 00:02:32.984573 | orchestrator | + content_sha512 = (known after apply)
2026-03-17 00:02:32.984576 | orchestrator | + directory_permission = "0777"
2026-03-17 00:02:32.984580 | orchestrator | + file_permission = "0644"
2026-03-17 00:02:32.984598 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-17 00:02:32.984602 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.984605 | orchestrator | }
2026-03-17 00:02:32.984609 | orchestrator |
2026-03-17 00:02:32.984618 | orchestrator | # local_file.inventory will be created
2026-03-17 00:02:32.984622 | orchestrator | + resource "local_file" "inventory" {
2026-03-17 00:02:32.984626 | orchestrator | + content = (known after apply)
2026-03-17 00:02:32.984630 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-17 00:02:32.984633 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-17 00:02:32.984637 | orchestrator | + content_md5 = (known after apply)
2026-03-17 00:02:32.984641 | orchestrator | + content_sha1 = (known after apply)
2026-03-17 00:02:32.984644 | orchestrator | + content_sha256 = (known after apply)
2026-03-17 00:02:32.984648 | orchestrator | + content_sha512 = (known after apply)
2026-03-17 00:02:32.984652 | orchestrator | + directory_permission = "0777"
2026-03-17 00:02:32.984655 | orchestrator | + file_permission = "0644"
2026-03-17 00:02:32.984659 | orchestrator | + filename = "inventory.ci"
2026-03-17 00:02:32.984663 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.984666 | orchestrator | }
2026-03-17 00:02:32.984670 | orchestrator |
2026-03-17 00:02:32.984674 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-17 00:02:32.984678 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-17 00:02:32.984681 | orchestrator | + content = (sensitive value)
2026-03-17 00:02:32.984685 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-17 00:02:32.984689 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-17 00:02:32.984692 | orchestrator | + content_md5 = (known after apply)
2026-03-17 00:02:32.984696 | orchestrator | + content_sha1 = (known after apply)
2026-03-17 00:02:32.984700 | orchestrator | + content_sha256 = (known after apply)
2026-03-17 00:02:32.984704 | orchestrator | + content_sha512 = (known after apply)
2026-03-17 00:02:32.984707 | orchestrator | + directory_permission = "0700"
2026-03-17 00:02:32.984711 | orchestrator | + file_permission = "0600"
2026-03-17 00:02:32.984715 | orchestrator | + filename = ".id_rsa.ci"
2026-03-17 00:02:32.984718 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.984722 | orchestrator | }
2026-03-17 00:02:32.984726 | orchestrator |
2026-03-17 00:02:32.984736 | orchestrator | # null_resource.node_semaphore will be created
2026-03-17 00:02:32.984740 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-17 00:02:32.984744 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.984748 | orchestrator | }
2026-03-17 00:02:32.984751 | orchestrator |
2026-03-17 00:02:32.984755 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-17 00:02:32.984759 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-17 00:02:32.984763 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.984767 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.984771 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.984775 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:32.984796 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.984800 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-17 00:02:32.984804 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.984807 | orchestrator | + size = 80
2026-03-17 00:02:32.984811 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.984815 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.984818 | orchestrator | }
2026-03-17 00:02:32.984822 | orchestrator |
2026-03-17 00:02:32.984826 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-17 00:02:32.984830 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:32.984833 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.984837 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.984841 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.984849 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:32.984853 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.984856 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-17 00:02:32.984860 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.984864 | orchestrator | + size = 80
2026-03-17 00:02:32.984867 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.984871 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.984875 | orchestrator | }
2026-03-17 00:02:32.984878 | orchestrator |
2026-03-17 00:02:32.984882 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-17 00:02:32.984886 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:32.984890 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.984893 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.984897 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.984901 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:32.984904 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.984908 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-17 00:02:32.984912 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.984915 | orchestrator | + size = 80
2026-03-17 00:02:32.984919 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.984923 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.984926 | orchestrator | }
2026-03-17 00:02:32.984930 | orchestrator |
2026-03-17 00:02:32.984934 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-17 00:02:32.984938 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:32.984941 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.984945 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.984949 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.984952 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:32.984956 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.984960 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-17 00:02:32.984963 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.984967 | orchestrator | + size = 80
2026-03-17 00:02:32.984971 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.984974 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.984978 | orchestrator | }
2026-03-17 00:02:32.984982 | orchestrator |
2026-03-17 00:02:32.984985 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-17 00:02:32.984989 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:32.984993 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.984997 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.985000 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.985004 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:32.985008 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.985014 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-17 00:02:32.985017 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.985021 | orchestrator | + size = 80
2026-03-17 00:02:32.985025 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.985028 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.985032 | orchestrator | }
2026-03-17 00:02:32.985036 | orchestrator |
2026-03-17 00:02:32.985040 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-17 00:02:32.985043 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:32.985047 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.985051 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.985055 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.985062 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:32.985066 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.985069 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-17 00:02:32.985073 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.985077 | orchestrator | + size = 80
2026-03-17 00:02:32.985080 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.985084 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.985088 | orchestrator | }
2026-03-17 00:02:32.985091 | orchestrator |
2026-03-17 00:02:32.985095 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-17 00:02:32.985099 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:32.985103 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.985106 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.985110 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.985116 | orchestrator | + image_id = (known after apply)
2026-03-17 00:02:32.985120 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.985124 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-17 00:02:32.985128 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.985132 | orchestrator | + size = 80
2026-03-17 00:02:32.985135 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.985139 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.985143 | orchestrator | }
2026-03-17 00:02:32.985146 | orchestrator |
2026-03-17 00:02:32.985150 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-17 00:02:32.985154 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:32.985158 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.985161 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.985165 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.985169 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.985173 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-17 00:02:32.985176 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.985180 | orchestrator | + size = 20
2026-03-17 00:02:32.985184 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.985187 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.985191 | orchestrator | }
2026-03-17 00:02:32.985195 | orchestrator |
2026-03-17 00:02:32.985199 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-17 00:02:32.985202 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:32.985206 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.985210 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.985213 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.985217 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.985221 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-17 00:02:32.985224 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.985228 | orchestrator | + size = 20
2026-03-17 00:02:32.985232 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.985236 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.985239 | orchestrator | }
2026-03-17 00:02:32.985243 | orchestrator |
2026-03-17 00:02:32.985247 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-17 00:02:32.985251 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:32.985254 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.985258 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.985262 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.985265 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.985269 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-17 00:02:32.985273 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.985280 | orchestrator | + size = 20
2026-03-17 00:02:32.985283 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.985287 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.985291 | orchestrator | }
2026-03-17 00:02:32.985295 | orchestrator |
2026-03-17 00:02:32.985299 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-17 00:02:32.985302 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:32.985306 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.985310 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.985313 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.985317 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.985321 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-17 00:02:32.985324 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.985328 | orchestrator | + size = 20
2026-03-17 00:02:32.985332 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.985336 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.985339 | orchestrator | }
2026-03-17 00:02:32.985343 | orchestrator |
2026-03-17 00:02:32.985347 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-17 00:02:32.985350 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:32.985354 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.985358 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.985362 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.985365 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.985369 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-17 00:02:32.985373 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.985379 | orchestrator | + size = 20
2026-03-17 00:02:32.985383 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.985386 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.985390 | orchestrator | }
2026-03-17 00:02:32.985394 | orchestrator |
2026-03-17 00:02:32.985398 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-17 00:02:32.985401 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:32.985405 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.985409 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.985413 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.985416 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.985420 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-17 00:02:32.985424 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.985427 | orchestrator | + size = 20
2026-03-17 00:02:32.985431 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.985435 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.985438 | orchestrator | }
2026-03-17 00:02:32.985442 | orchestrator |
2026-03-17 00:02:32.985446 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-17 00:02:32.985450 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:32.985453 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.985457 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.985461 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.985464 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.985468 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-17 00:02:32.985472 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.985476 | orchestrator | + size = 20
2026-03-17 00:02:32.985482 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.985486 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.985489 | orchestrator | }
2026-03-17 00:02:32.985493 | orchestrator |
2026-03-17 00:02:32.985497 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-17 00:02:32.985501 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:32.985508 | orchestrator | + attachment = (known after apply)
2026-03-17 00:02:32.985511 | orchestrator | + availability_zone = "nova"
2026-03-17 00:02:32.985515 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.985519 | orchestrator | + metadata = (known after apply)
2026-03-17 00:02:32.985522 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-17 00:02:32.985526 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.985530 | orchestrator | + size = 20
2026-03-17 00:02:32.985534 | orchestrator | + volume_retype_policy = "never"
2026-03-17 00:02:32.985537 | orchestrator | + volume_type = "ssd"
2026-03-17 00:02:32.985541 | orchestrator | }
2026-03-17 00:02:32.985545 | orchestrator |
2026-03-17 00:02:32.985549 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-17 00:02:32.985552 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-17 00:02:32.985556 | orchestrator | + attachment = (known after apply) 2026-03-17 00:02:32.985560 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.985563 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.985567 | orchestrator | + metadata = (known after apply) 2026-03-17 00:02:32.985571 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-17 00:02:32.985574 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.985578 | orchestrator | + size = 20 2026-03-17 00:02:32.985582 | orchestrator | + volume_retype_policy = "never" 2026-03-17 00:02:32.985585 | orchestrator | + volume_type = "ssd" 2026-03-17 00:02:32.985589 | orchestrator | } 2026-03-17 00:02:32.985593 | orchestrator | 2026-03-17 00:02:32.985597 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-17 00:02:32.985600 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-17 00:02:32.985604 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:32.985608 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.985612 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.985615 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.985619 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.985623 | orchestrator | + config_drive = true 2026-03-17 00:02:32.985626 | orchestrator | + created = (known after apply) 2026-03-17 00:02:32.985630 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.985634 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-17 00:02:32.985637 | orchestrator | + force_delete = false 2026-03-17 00:02:32.985641 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:32.985645 | 
orchestrator | + id = (known after apply) 2026-03-17 00:02:32.985648 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.985652 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.985656 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.985659 | orchestrator | + name = "testbed-manager" 2026-03-17 00:02:32.985663 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.985667 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.985671 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.985674 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:32.985678 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.985682 | orchestrator | + user_data = (sensitive value) 2026-03-17 00:02:32.985685 | orchestrator | 2026-03-17 00:02:32.985689 | orchestrator | + block_device { 2026-03-17 00:02:32.985693 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.985697 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:32.985703 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.985707 | orchestrator | + multiattach = false 2026-03-17 00:02:32.985710 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.985714 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.985721 | orchestrator | } 2026-03-17 00:02:32.985725 | orchestrator | 2026-03-17 00:02:32.985729 | orchestrator | + network { 2026-03-17 00:02:32.985732 | orchestrator | + access_network = false 2026-03-17 00:02:32.985736 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.985740 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.985744 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.985747 | orchestrator | + name = (known after apply) 2026-03-17 00:02:32.985751 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.985755 | orchestrator | + uuid = (known after apply) 2026-03-17 
00:02:32.985758 | orchestrator | } 2026-03-17 00:02:32.985762 | orchestrator | } 2026-03-17 00:02:32.985766 | orchestrator | 2026-03-17 00:02:32.985770 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-17 00:02:32.985773 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:32.985809 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:32.985814 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.985818 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.985822 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.985825 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.985829 | orchestrator | + config_drive = true 2026-03-17 00:02:32.985833 | orchestrator | + created = (known after apply) 2026-03-17 00:02:32.985836 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.985840 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:32.985844 | orchestrator | + force_delete = false 2026-03-17 00:02:32.985847 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:32.985851 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.985855 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.985859 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.985862 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.985866 | orchestrator | + name = "testbed-node-0" 2026-03-17 00:02:32.985870 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.985873 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.985877 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.985881 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:32.985885 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.985891 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:32.985895 | orchestrator | 2026-03-17 00:02:32.985899 | orchestrator | + block_device { 2026-03-17 00:02:32.985902 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.985906 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:32.985910 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.985913 | orchestrator | + multiattach = false 2026-03-17 00:02:32.985917 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.985921 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.985925 | orchestrator | } 2026-03-17 00:02:32.985928 | orchestrator | 2026-03-17 00:02:32.985932 | orchestrator | + network { 2026-03-17 00:02:32.985936 | orchestrator | + access_network = false 2026-03-17 00:02:32.985940 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.985944 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.985947 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.985951 | orchestrator | + name = (known after apply) 2026-03-17 00:02:32.985955 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.985958 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.985962 | orchestrator | } 2026-03-17 00:02:32.985966 | orchestrator | } 2026-03-17 00:02:32.985969 | orchestrator | 2026-03-17 00:02:32.985973 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-17 00:02:32.985977 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:32.985981 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:32.985988 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.985991 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.985995 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.985999 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.986002 
| orchestrator | + config_drive = true 2026-03-17 00:02:32.986006 | orchestrator | + created = (known after apply) 2026-03-17 00:02:32.986010 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.986032 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:32.986036 | orchestrator | + force_delete = false 2026-03-17 00:02:32.986040 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:32.986043 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.986047 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.986051 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.986055 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.986058 | orchestrator | + name = "testbed-node-1" 2026-03-17 00:02:32.986062 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.986066 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.986069 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.986073 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:32.986077 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.986080 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:32.986084 | orchestrator | 2026-03-17 00:02:32.986093 | orchestrator | + block_device { 2026-03-17 00:02:32.986097 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.986101 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:32.986104 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.986108 | orchestrator | + multiattach = false 2026-03-17 00:02:32.986112 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.986115 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.986119 | orchestrator | } 2026-03-17 00:02:32.986123 | orchestrator | 2026-03-17 00:02:32.986127 | orchestrator | + network { 2026-03-17 00:02:32.986130 | orchestrator | + access_network = 
false 2026-03-17 00:02:32.986134 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.986138 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.986141 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.986145 | orchestrator | + name = (known after apply) 2026-03-17 00:02:32.986149 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.986152 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.986156 | orchestrator | } 2026-03-17 00:02:32.986160 | orchestrator | } 2026-03-17 00:02:32.986164 | orchestrator | 2026-03-17 00:02:32.986167 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-17 00:02:32.986171 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:32.986175 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:32.986179 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.986183 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.986186 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.986193 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.986196 | orchestrator | + config_drive = true 2026-03-17 00:02:32.986200 | orchestrator | + created = (known after apply) 2026-03-17 00:02:32.986204 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.986207 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:32.986211 | orchestrator | + force_delete = false 2026-03-17 00:02:32.986215 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:32.986218 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.986222 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.986229 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.986233 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.986236 | orchestrator | + name = 
"testbed-node-2" 2026-03-17 00:02:32.986240 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.986244 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.986247 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.986251 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:32.986255 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.986258 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:32.986262 | orchestrator | 2026-03-17 00:02:32.986266 | orchestrator | + block_device { 2026-03-17 00:02:32.986270 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.986273 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:32.986277 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.986281 | orchestrator | + multiattach = false 2026-03-17 00:02:32.986284 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.986288 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.986292 | orchestrator | } 2026-03-17 00:02:32.986296 | orchestrator | 2026-03-17 00:02:32.986299 | orchestrator | + network { 2026-03-17 00:02:32.986303 | orchestrator | + access_network = false 2026-03-17 00:02:32.986307 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.986313 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.986317 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.986321 | orchestrator | + name = (known after apply) 2026-03-17 00:02:32.986325 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.986328 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.986332 | orchestrator | } 2026-03-17 00:02:32.986336 | orchestrator | } 2026-03-17 00:02:32.986340 | orchestrator | 2026-03-17 00:02:32.986343 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-17 00:02:32.986347 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:32.986351 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:32.986355 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.986358 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.986362 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.986366 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.986369 | orchestrator | + config_drive = true 2026-03-17 00:02:32.986373 | orchestrator | + created = (known after apply) 2026-03-17 00:02:32.986377 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.986380 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:32.986384 | orchestrator | + force_delete = false 2026-03-17 00:02:32.986388 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:32.986391 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.986395 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.986399 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.986403 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.986406 | orchestrator | + name = "testbed-node-3" 2026-03-17 00:02:32.986410 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.986414 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.986417 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.986421 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:32.986425 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.986428 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:32.986432 | orchestrator | 2026-03-17 00:02:32.986436 | orchestrator | + block_device { 2026-03-17 00:02:32.986444 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.986448 | orchestrator | + delete_on_termination = false 2026-03-17 
00:02:32.986452 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.986458 | orchestrator | + multiattach = false 2026-03-17 00:02:32.986462 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.986466 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.986469 | orchestrator | } 2026-03-17 00:02:32.986473 | orchestrator | 2026-03-17 00:02:32.986477 | orchestrator | + network { 2026-03-17 00:02:32.986481 | orchestrator | + access_network = false 2026-03-17 00:02:32.986484 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.986488 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.986492 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.986495 | orchestrator | + name = (known after apply) 2026-03-17 00:02:32.986499 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.986503 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.986506 | orchestrator | } 2026-03-17 00:02:32.986510 | orchestrator | } 2026-03-17 00:02:32.986514 | orchestrator | 2026-03-17 00:02:32.986518 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-17 00:02:32.986521 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:32.986525 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:32.986529 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.986533 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.986536 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.986540 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.986544 | orchestrator | + config_drive = true 2026-03-17 00:02:32.986547 | orchestrator | + created = (known after apply) 2026-03-17 00:02:32.986551 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.986555 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:32.986558 | 
orchestrator | + force_delete = false 2026-03-17 00:02:32.986562 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:32.986566 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.986569 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.986573 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.986577 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.986580 | orchestrator | + name = "testbed-node-4" 2026-03-17 00:02:32.986584 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.986588 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.986591 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.986595 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:32.986599 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.986602 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:32.986606 | orchestrator | 2026-03-17 00:02:32.986610 | orchestrator | + block_device { 2026-03-17 00:02:32.986614 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.986617 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:32.986621 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.986625 | orchestrator | + multiattach = false 2026-03-17 00:02:32.986628 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.986632 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.986636 | orchestrator | } 2026-03-17 00:02:32.986640 | orchestrator | 2026-03-17 00:02:32.986643 | orchestrator | + network { 2026-03-17 00:02:32.986647 | orchestrator | + access_network = false 2026-03-17 00:02:32.986651 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.986654 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.986658 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.986662 | orchestrator | + name = (known 
after apply) 2026-03-17 00:02:32.986665 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.986669 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.986673 | orchestrator | } 2026-03-17 00:02:32.986677 | orchestrator | } 2026-03-17 00:02:32.986684 | orchestrator | 2026-03-17 00:02:32.986688 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-17 00:02:32.986691 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:32.986695 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:32.986701 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.986705 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.986709 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.986712 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.986716 | orchestrator | + config_drive = true 2026-03-17 00:02:32.986720 | orchestrator | + created = (known after apply) 2026-03-17 00:02:32.986723 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.986727 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:32.986731 | orchestrator | + force_delete = false 2026-03-17 00:02:32.986737 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:32.986741 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.986745 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.986748 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.986752 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.986756 | orchestrator | + name = "testbed-node-5" 2026-03-17 00:02:32.986759 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.986763 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.986767 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.986770 | orchestrator | + 
stop_before_destroy = false 2026-03-17 00:02:32.986774 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.986792 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:32.986796 | orchestrator | 2026-03-17 00:02:32.986800 | orchestrator | + block_device { 2026-03-17 00:02:32.986804 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.986808 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:32.986811 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.986815 | orchestrator | + multiattach = false 2026-03-17 00:02:32.986819 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.986822 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.986826 | orchestrator | } 2026-03-17 00:02:32.986830 | orchestrator | 2026-03-17 00:02:32.986834 | orchestrator | + network { 2026-03-17 00:02:32.986837 | orchestrator | + access_network = false 2026-03-17 00:02:32.986841 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.986845 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.986848 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.986852 | orchestrator | + name = (known after apply) 2026-03-17 00:02:32.986856 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.986860 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.986864 | orchestrator | } 2026-03-17 00:02:32.986867 | orchestrator | } 2026-03-17 00:02:32.986871 | orchestrator | 2026-03-17 00:02:32.986875 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-17 00:02:32.986879 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-17 00:02:32.986882 | orchestrator | + fingerprint = (known after apply) 2026-03-17 00:02:32.986886 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.986890 | orchestrator | + name = "testbed" 2026-03-17 00:02:32.986894 | orchestrator | + private_key = 
(sensitive value) 2026-03-17 00:02:32.986897 | orchestrator | + public_key = (known after apply) 2026-03-17 00:02:32.986901 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.986905 | orchestrator | + user_id = (known after apply) 2026-03-17 00:02:32.986909 | orchestrator | } 2026-03-17 00:02:32.986912 | orchestrator | 2026-03-17 00:02:32.986916 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-17 00:02:32.986920 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-17 00:02:32.986927 | orchestrator | + device = (known after apply) 2026-03-17 00:02:32.986930 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.986934 | orchestrator | + instance_id = (known after apply) 2026-03-17 00:02:32.986938 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.986941 | orchestrator | + volume_id = (known after apply) 2026-03-17 00:02:32.986945 | orchestrator | } 2026-03-17 00:02:32.986949 | orchestrator | 2026-03-17 00:02:32.986953 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-17 00:02:32.986957 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-17 00:02:32.986960 | orchestrator | + device = (known after apply) 2026-03-17 00:02:32.986964 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.986968 | orchestrator | + instance_id = (known after apply) 2026-03-17 00:02:32.986972 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.986975 | orchestrator | + volume_id = (known after apply) 2026-03-17 00:02:32.986979 | orchestrator | } 2026-03-17 00:02:32.986983 | orchestrator | 2026-03-17 00:02:32.986987 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-17 00:02:32.986991 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] through [8] will be created
  # (identical to [3]: all attributes known after apply)

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
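The nine identical `node_volume_attachment` instances in the plan are characteristic of a counted resource. A minimal sketch of Terraform source that could produce such a plan — only the resource type and name come from the plan output; the variable and the referenced instance/volume resources are assumptions:

```hcl
# Hypothetical sketch: attach one volume per node, indexed [0..8].
# The instance and volume resources referenced here are assumed,
# not shown in this part of the plan.
variable "volume_count" {
  default = 9
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = var.volume_count
  instance_id = openstack_compute_instance_v2.node[count.index].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

With `count`, every attribute of every instance is derived from resources not yet created, which is why the plan shows all of them as `(known after apply)`.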
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] through [5] will be created
  # (identical to [0] except fixed_ip.ip_address: 192.168.16.11, 192.168.16.12,
  # 192.168.16.13, 192.168.16.14, and 192.168.16.15 respectively)

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
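Each node management port carries three `allowed_address_pairs` entries (192.168.16.254/32, .8/32, .9/32) alongside its own fixed IP, which lets those shared VIP addresses pass the port-security filter on every node. A hypothetical sketch of source that would plan this way — the fixed IPs and VIP CIDRs are taken from the plan output, while the network/subnet references are assumptions:

```hcl
# Hypothetical sketch of the node management ports seen in the plan.
# Fixed IPs 192.168.16.10..15 and the three VIP address pairs match
# the plan; the referenced network and subnet resources are assumed.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}"
  }

  # Allow the shared VIPs to be sourced from these ports despite
  # Neutron port security.
  dynamic "allowed_address_pairs" {
    for_each = ["192.168.16.254/32", "192.168.16.8/32", "192.168.16.9/32"]
    content {
      ip_address = allowed_address_pairs.value
    }
  }
}
```

Without the address pairs, traffic sent from a VIP (for example a keepalived/VRRP address) would be dropped by the anti-spoofing rules on the port.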
      + external_network_id    = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id = (known after apply)
      + id                     = (known after apply)
      + name                   = "testbed"
      + region                 = (known after apply)
      + tenant_id              = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # security_group_management_rule3 and security_group_management_rule4 will be
  # created: ingress IPv4 rules from 192.168.16.0/20 for protocols "tcp" and
  # "udp" respectively (all other attributes known after apply)

  # security_group_management_rule5 will be created: ingress IPv4 "icmp" rule
  # from 0.0.0.0/0 (all other attributes known after apply)

  # security_group_node_rule1, security_group_node_rule2, and
  # security_group_node_rule3 will be created: ingress IPv4 rules from
  # 0.0.0.0/0 for protocols "tcp", "udp", and "icmp" respectively
  # (all other attributes known after apply)

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-17 00:02:32.989213 | orchestrator | + network_id = (known after apply) 2026-03-17 00:02:32.989216 | orchestrator | + no_gateway = false 2026-03-17 00:02:32.989220 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.989224 | orchestrator | + service_types = (known after apply) 2026-03-17 00:02:32.989231 | orchestrator | + tenant_id = (known after apply) 2026-03-17 00:02:32.989235 | orchestrator | 2026-03-17 00:02:32.989239 | orchestrator | + allocation_pool { 2026-03-17 00:02:32.989242 | orchestrator | + end = "192.168.31.250" 2026-03-17 00:02:32.989246 | orchestrator | + start = "192.168.31.200" 2026-03-17 00:02:32.989250 | orchestrator | } 2026-03-17 00:02:32.989253 | orchestrator | } 2026-03-17 00:02:32.989257 | orchestrator | 2026-03-17 00:02:32.989261 | orchestrator | # terraform_data.image will be created 2026-03-17 00:02:32.989264 | orchestrator | + resource "terraform_data" "image" { 2026-03-17 00:02:32.989268 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.989272 | orchestrator | + input = "Ubuntu 24.04" 2026-03-17 00:02:32.989276 | orchestrator | + output = (known after apply) 2026-03-17 00:02:32.989279 | orchestrator | } 2026-03-17 00:02:32.989283 | orchestrator | 2026-03-17 00:02:32.989287 | orchestrator | # terraform_data.image_node will be created 2026-03-17 00:02:32.989290 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-17 00:02:32.989294 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.989298 | orchestrator | + input = "Ubuntu 24.04" 2026-03-17 00:02:32.989301 | orchestrator | + output = (known after apply) 2026-03-17 00:02:32.989305 | orchestrator | } 2026-03-17 00:02:32.989309 | orchestrator | 2026-03-17 00:02:32.989312 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
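The plan above lists, among the 64 resources, a VRRP rule (`security_group_rule_vrrp`, protocol `"112"`, the IP protocol number for VRRP). The actual testbed source is not shown in this log, but a rule producing that plan entry would look roughly like the following sketch. Note the `security_group_id` is `(known after apply)` in the plan, so which group the rule attaches to is not visible here; attaching it to the management group is an assumption.

```hcl
# Sketch reconstructed from the plan output above, not the actual testbed source.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description      = "vrrp"
  direction        = "ingress"
  ethertype        = "IPv4"
  protocol         = "112"      # IP protocol number 112 = VRRP
  remote_ip_prefix = "0.0.0.0/0"

  # Assumption: the plan only shows "(known after apply)" for this field.
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```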
2026-03-17 00:02:32.989316 | orchestrator | 2026-03-17 00:02:32.989320 | orchestrator | Changes to Outputs: 2026-03-17 00:02:32.989324 | orchestrator | + manager_address = (sensitive value) 2026-03-17 00:02:32.989327 | orchestrator | + private_key = (sensitive value) 2026-03-17 00:02:33.158594 | orchestrator | terraform_data.image: Creating... 2026-03-17 00:02:33.205946 | orchestrator | terraform_data.image: Creation complete after 0s [id=6bc17799-3435-5bdb-60f3-7c97a58f0da9] 2026-03-17 00:02:33.206110 | orchestrator | terraform_data.image_node: Creating... 2026-03-17 00:02:33.206555 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=2249656f-3498-77b8-10c8-e8aaa481a85f] 2026-03-17 00:02:33.212285 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-17 00:02:33.212412 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-17 00:02:33.217237 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-17 00:02:33.217412 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-17 00:02:33.219864 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-17 00:02:33.223707 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-03-17 00:02:33.228126 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-03-17 00:02:33.228161 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-03-17 00:02:33.228267 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-17 00:02:33.239576 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-17 00:02:33.673116 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-17 00:02:34.037645 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 
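The `subnet_management` plan above declares `cidr = "192.168.16.0/20"` with an allocation pool of `192.168.31.200`–`192.168.31.250`. This is not part of the job itself, but a quick sanity check with the standard-library `ipaddress` module confirms the pool sits inside the CIDR, right below its broadcast address:

```python
import ipaddress

# Values taken from the subnet_management plan output above.
cidr = ipaddress.ip_network("192.168.16.0/20")
pool_start = ipaddress.ip_address("192.168.31.200")
pool_end = ipaddress.ip_address("192.168.31.250")

# Both pool boundaries must fall inside the subnet's CIDR.
assert pool_start in cidr and pool_end in cidr
print(cidr.broadcast_address)  # 192.168.31.255 -> the pool ends just below it
```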
2026-03-17 00:02:34.037712 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-17 00:02:34.037727 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-03-17 00:02:34.037740 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-03-17 00:02:34.037752 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-03-17 00:02:34.396180 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=6c65e153-2e2b-4365-ba5a-bbc2048e4319] 2026-03-17 00:02:34.406706 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-03-17 00:02:36.962066 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=2854fd14-3e82-4dcb-865e-ef6e028a2c86] 2026-03-17 00:02:36.966573 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=dd7becb9-0584-4efc-8944-d51272ed61fa] 2026-03-17 00:02:36.966630 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-03-17 00:02:36.972244 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-03-17 00:02:37.013098 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=f91ef76e-9f0f-49ef-bc09-7b70daad6579] 2026-03-17 00:02:37.021410 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-03-17 00:02:37.025115 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=a7deaf5a-cd70-43cd-92ab-ee3441c5e54f] 2026-03-17 00:02:37.033449 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=e46b8678-1baa-4ba8-a612-904460f97320] 2026-03-17 00:02:37.036190 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
2026-03-17 00:02:37.040550 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-03-17 00:02:37.048498 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=d8ebe49d-b73b-4490-897b-f13bdc67f86d] 2026-03-17 00:02:37.049773 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=0a90ba68-315a-4ce4-a803-8ffceb4dacc1] 2026-03-17 00:02:37.056345 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-03-17 00:02:37.063575 | orchestrator | local_file.id_rsa_pub: Creating... 2026-03-17 00:02:37.068129 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=0893ee4c2574a7476845cae8b747eb4ae40fae48] 2026-03-17 00:02:37.076222 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-03-17 00:02:37.078396 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=f95d5766-a3db-4d15-9977-785c02a190f5] 2026-03-17 00:02:37.079892 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=35918b1800c63fd12c2d6b92790ecc15c62ff7d0] 2026-03-17 00:02:37.084386 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-03-17 00:02:37.213832 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=9ec754d5-296d-4a8a-b6d8-e4830272a171] 2026-03-17 00:02:37.762988 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=b1d77269-ad7c-4f8a-934d-5b47c43e3d9f] 2026-03-17 00:02:37.977773 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=a6c14751-9996-47ee-b540-ee17edfa5c08] 2026-03-17 00:02:37.985026 | orchestrator | openstack_networking_router_v2.router: Creating... 
2026-03-17 00:02:40.439881 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=03bf2729-822f-4d31-8b12-53ff53864903] 2026-03-17 00:02:40.442806 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9] 2026-03-17 00:02:40.462740 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=1121225f-1607-435d-bcbb-f933b6d22b35] 2026-03-17 00:02:40.508237 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=3189c099-cba2-49c7-8cd7-9afaa3b71213] 2026-03-17 00:02:40.525348 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=59054c1a-2b5d-4689-a437-1a1bb7be34e5] 2026-03-17 00:02:40.532871 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=15a4589a-55c0-4383-a3c8-a64ced338069] 2026-03-17 00:02:41.451528 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=0259a099-aac5-4805-889b-951a956a4679] 2026-03-17 00:02:41.463519 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-03-17 00:02:41.464396 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-03-17 00:02:41.464858 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-03-17 00:02:41.733441 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=9e187f81-d688-4f78-945e-cb468bc6c5a6] 2026-03-17 00:02:41.740527 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=eb24f35e-bee0-4435-963c-1038584b2563] 2026-03-17 00:02:41.744584 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
2026-03-17 00:02:41.746007 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-03-17 00:02:41.746394 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-03-17 00:02:41.747297 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-03-17 00:02:41.747872 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-03-17 00:02:41.750535 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-03-17 00:02:41.754361 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-03-17 00:02:41.754566 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-03-17 00:02:41.768196 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-03-17 00:02:41.956323 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=1a121208-d9fa-4aeb-b5d2-8487e22d52d8] 2026-03-17 00:02:41.969596 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-03-17 00:02:42.289665 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=d4ac09d8-31d2-4c68-974a-12f396a27abb] 2026-03-17 00:02:42.303766 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-03-17 00:02:42.696745 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=269e2bfc-a97c-4f59-9884-42206f91305d] 2026-03-17 00:02:42.710172 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 
2026-03-17 00:02:42.798966 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=7fddc1df-c69a-454f-8291-5c2bbccd29e6] 2026-03-17 00:02:42.811073 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-03-17 00:02:42.961836 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=9704e9c1-d114-4c7c-9302-58ffdcb069df] 2026-03-17 00:02:42.967152 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-03-17 00:02:43.068114 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=e5fffc2f-aa71-4530-a7b4-c31f23a9b77c] 2026-03-17 00:02:43.070943 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-03-17 00:02:43.214431 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=d51a6e7f-1f1e-4372-9df7-6b7700f5db01] 2026-03-17 00:02:43.220824 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 
2026-03-17 00:02:43.424191 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=a08c8cab-99ba-412f-980e-1129c6a087f2] 2026-03-17 00:02:43.584726 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=2625b5e3-ffd1-484b-8d4f-a5f8bc1dede9] 2026-03-17 00:02:43.631567 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=4d10fad2-8948-4e54-bbc0-aa471bd27155] 2026-03-17 00:02:44.102598 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=4628ee69-8a26-4412-9392-396586b2c13a] 2026-03-17 00:02:44.389972 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=475bb383-76e6-4125-9220-192d6c383195] 2026-03-17 00:02:44.904192 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=b27de01c-fe8d-4672-9b2c-52429c1c2be6] 2026-03-17 00:02:44.984331 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=72322ef7-c84a-4722-bbdd-e531b80ef7ca] 2026-03-17 00:02:44.991053 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-03-17 00:02:45.069368 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=41b054fa-6cc8-491d-894e-33c5604eec55] 2026-03-17 00:02:45.646757 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 4s [id=b66ec9d8-576c-48cf-a1ab-ccd4c9f9b4f1] 2026-03-17 00:02:45.680688 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-03-17 00:02:45.681045 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-03-17 00:02:45.682520 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 
2026-03-17 00:02:45.685203 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-03-17 00:02:45.685917 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-03-17 00:02:45.699961 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-03-17 00:02:46.085175 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 3s [id=0d3979f2-f3d4-4c52-9964-d66c4115b977] 2026-03-17 00:02:48.513157 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 4s [id=8af36621-2738-4586-a8c0-89dd82b65b17] 2026-03-17 00:02:48.519491 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-03-17 00:02:48.522601 | orchestrator | local_file.inventory: Creating... 2026-03-17 00:02:48.529512 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-03-17 00:02:48.877203 | orchestrator | local_file.inventory: Creation complete after 0s [id=bcd9ff6eac8513a4aabd0e74aed9221079f5bedb] 2026-03-17 00:02:48.877508 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=593fd305d62d7cef14a29c23250522939f164850] 2026-03-17 00:02:50.075950 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=8af36621-2738-4586-a8c0-89dd82b65b17] 2026-03-17 00:02:55.688909 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-03-17 00:02:55.689121 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-03-17 00:02:55.689240 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-03-17 00:02:55.689257 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... 
[10s elapsed] 2026-03-17 00:02:55.689268 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-03-17 00:02:55.700713 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-03-17 00:03:05.697932 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-03-17 00:03:05.698234 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-03-17 00:03:05.698309 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-03-17 00:03:05.698350 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-03-17 00:03:05.698394 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-03-17 00:03:05.701511 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-03-17 00:03:15.707515 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-03-17 00:03:15.707636 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-03-17 00:03:15.707681 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-03-17 00:03:15.707719 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-03-17 00:03:15.707737 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-03-17 00:03:15.707752 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2026-03-17 00:03:16.725536 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=0298146c-2591-4ed6-bb2d-d6f237587378] 2026-03-17 00:03:16.746314 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=82d3bbd3-5143-4632-ad31-316aa990c8c5] 2026-03-17 00:03:25.708456 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-03-17 00:03:25.708577 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-03-17 00:03:25.708602 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2026-03-17 00:03:25.708692 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2026-03-17 00:03:26.602100 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=da2ef2bb-2315-4b80-a7f4-9ca251c9f780] 2026-03-17 00:03:26.712669 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=668d9488-678b-4e40-af35-e9cd6769c1a3] 2026-03-17 00:03:35.717332 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed] 2026-03-17 00:03:35.717439 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed] 2026-03-17 00:03:36.577290 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 51s [id=21e91f97-36ab-413e-b7e7-1aeddcdabfca] 2026-03-17 00:03:37.332204 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 51s [id=57a39861-0b56-407d-a5bf-9439022712c2] 2026-03-17 00:03:37.357554 | orchestrator | null_resource.node_semaphore: Creating... 2026-03-17 00:03:37.359619 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 
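Terraform reports per-resource timings in lines of the form `<resource>: Creation complete after Ns [id=...]`, as seen throughout this apply. A hypothetical helper (not part of the job) for pulling those timings out of a console log like this one:

```python
import re

# Hypothetical helper: extract (resource, seconds) pairs from Terraform apply
# lines of the form "<resource>: Creation complete after 31s [id=...]".
PATTERN = re.compile(r"(\S+): Creation complete after (\d+)s")

def creation_times(log: str) -> dict:
    """Map each resource address to its creation time in seconds."""
    return {name: int(secs) for name, secs in PATTERN.findall(log)}

# Sample lines shaped like the apply output in this log (ids truncated).
sample = (
    "openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=0298...]\n"
    "openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=da2e...]\n"
)
print(creation_times(sample))
```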
2026-03-17 00:03:37.361504 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-03-17 00:03:37.370215 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-03-17 00:03:37.375601 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=5683300038413001079] 2026-03-17 00:03:37.380333 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-03-17 00:03:37.381506 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-03-17 00:03:37.383979 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-03-17 00:03:37.392724 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-03-17 00:03:37.393142 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-03-17 00:03:37.404474 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-03-17 00:03:37.417173 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 
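The `openstack_compute_volume_attach_v2` IDs reported once these attachments complete are composites of the form `<server_uuid>/<volume_uuid>` (e.g. the node_server and node_volume IDs created earlier, joined by `/`). A hypothetical helper to split one back apart:

```python
# Hypothetical helper: volume-attachment IDs in this log are
# "<server_uuid>/<volume_uuid>" composites; split one into its parts.
def split_attachment_id(attach_id: str):
    server_id, volume_id = attach_id.split("/", 1)
    return server_id, volume_id

# Example composed from server and volume IDs appearing in this apply log.
server, volume = split_attachment_id(
    "668d9488-678b-4e40-af35-e9cd6769c1a3/0a90ba68-315a-4ce4-a803-8ffceb4dacc1"
)
print(server)  # 668d9488-678b-4e40-af35-e9cd6769c1a3
```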
2026-03-17 00:03:46.837726 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=668d9488-678b-4e40-af35-e9cd6769c1a3/0a90ba68-315a-4ce4-a803-8ffceb4dacc1] 2026-03-17 00:03:46.841203 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=da2ef2bb-2315-4b80-a7f4-9ca251c9f780/f91ef76e-9f0f-49ef-bc09-7b70daad6579] 2026-03-17 00:03:46.916230 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=82d3bbd3-5143-4632-ad31-316aa990c8c5/2854fd14-3e82-4dcb-865e-ef6e028a2c86] 2026-03-17 00:03:46.921418 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=668d9488-678b-4e40-af35-e9cd6769c1a3/dd7becb9-0584-4efc-8944-d51272ed61fa] 2026-03-17 00:03:47.003552 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=82d3bbd3-5143-4632-ad31-316aa990c8c5/f95d5766-a3db-4d15-9977-785c02a190f5] 2026-03-17 00:03:47.038498 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=da2ef2bb-2315-4b80-a7f4-9ca251c9f780/d8ebe49d-b73b-4490-897b-f13bdc67f86d] 2026-03-17 00:03:47.046836 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=668d9488-678b-4e40-af35-e9cd6769c1a3/a7deaf5a-cd70-43cd-92ab-ee3441c5e54f] 2026-03-17 00:03:47.084874 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=82d3bbd3-5143-4632-ad31-316aa990c8c5/e46b8678-1baa-4ba8-a612-904460f97320] 2026-03-17 00:03:47.269672 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=da2ef2bb-2315-4b80-a7f4-9ca251c9f780/9ec754d5-296d-4a8a-b6d8-e4830272a171] 2026-03-17 00:03:47.422104 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-03-17 00:03:57.423050 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-03-17 00:03:57.860215 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=c4562477-3c70-4022-aa4f-71b2809ebac2] 2026-03-17 00:03:57.877165 | orchestrator | 2026-03-17 00:03:57.877222 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-03-17 00:03:57.877254 | orchestrator | 2026-03-17 00:03:57.877261 | orchestrator | Outputs: 2026-03-17 00:03:57.877265 | orchestrator | 2026-03-17 00:03:57.877282 | orchestrator | manager_address = 2026-03-17 00:03:57.877287 | orchestrator | private_key = 2026-03-17 00:03:58.104347 | orchestrator | ok: Runtime: 0:01:31.734890 2026-03-17 00:03:58.147699 | 2026-03-17 00:03:58.147954 | TASK [Fetch manager address] 2026-03-17 00:03:58.626662 | orchestrator | ok 2026-03-17 00:03:58.637284 | 2026-03-17 00:03:58.637420 | TASK [Set manager_host address] 2026-03-17 00:03:58.718721 | orchestrator | ok 2026-03-17 00:03:58.728633 | 2026-03-17 00:03:58.728773 | LOOP [Update ansible collections] 2026-03-17 00:04:00.102501 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-17 00:04:00.102987 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-17 00:04:00.103065 | orchestrator | Starting galaxy collection install process 2026-03-17 00:04:00.103131 | orchestrator | Process install dependency map 2026-03-17 00:04:00.103173 | orchestrator | Starting collection install process 2026-03-17 00:04:00.103210 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2026-03-17 00:04:00.103256 | orchestrator | Created collection for osism.commons:999.0.0 at 
/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2026-03-17 00:04:00.103311 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-03-17 00:04:00.103392 | orchestrator | ok: Item: commons Runtime: 0:00:00.928332 2026-03-17 00:04:01.386525 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-17 00:04:01.386740 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-17 00:04:01.386821 | orchestrator | Starting galaxy collection install process 2026-03-17 00:04:01.386886 | orchestrator | Process install dependency map 2026-03-17 00:04:01.386927 | orchestrator | Starting collection install process 2026-03-17 00:04:01.386963 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2026-03-17 00:04:01.387000 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2026-03-17 00:04:01.387035 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-17 00:04:01.387090 | orchestrator | ok: Item: services Runtime: 0:00:00.991309 2026-03-17 00:04:01.405151 | 2026-03-17 00:04:01.405314 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-17 00:04:12.887346 | orchestrator | ok 2026-03-17 00:04:12.899229 | 2026-03-17 00:04:12.899356 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-17 00:05:12.950418 | orchestrator | ok 2026-03-17 00:05:12.967402 | 2026-03-17 00:05:12.967630 | TASK [Fetch manager ssh hostkey] 2026-03-17 00:05:14.569234 | orchestrator | Output suppressed because no_log was given 2026-03-17 00:05:14.586649 | 2026-03-17 00:05:14.586963 | TASK [Get ssh keypair from terraform environment] 2026-03-17 00:05:15.125340 | orchestrator | ok: Runtime: 0:00:00.008272 2026-03-17 00:05:15.141774 | 
2026-03-17 00:05:15.141931 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-17 00:05:15.187000 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-17 00:05:15.201715 | 2026-03-17 00:05:15.201852 | TASK [Run manager part 0] 2026-03-17 00:05:16.183759 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-17 00:05:16.230900 | orchestrator | 2026-03-17 00:05:16.230939 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-17 00:05:16.230946 | orchestrator | 2026-03-17 00:05:16.230959 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-17 00:05:17.968723 | orchestrator | ok: [testbed-manager] 2026-03-17 00:05:17.968786 | orchestrator | 2026-03-17 00:05:17.968932 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-17 00:05:17.968987 | orchestrator | 2026-03-17 00:05:17.969068 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:05:19.818388 | orchestrator | ok: [testbed-manager] 2026-03-17 00:05:19.818430 | orchestrator | 2026-03-17 00:05:19.818439 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-17 00:05:20.418469 | orchestrator | ok: [testbed-manager] 2026-03-17 00:05:20.418612 | orchestrator | 2026-03-17 00:05:20.418632 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-17 00:05:20.457105 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:05:20.457158 | orchestrator | 2026-03-17 00:05:20.457176 | orchestrator | TASK [Update package cache] **************************************************** 2026-03-17 
00:05:20.480573 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:05:20.480604 | orchestrator | 2026-03-17 00:05:20.480609 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-17 00:05:20.505120 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:05:20.505157 | orchestrator | 2026-03-17 00:05:20.505162 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-17 00:05:20.532618 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:05:20.532653 | orchestrator | 2026-03-17 00:05:20.532659 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-17 00:05:20.556969 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:05:20.557001 | orchestrator | 2026-03-17 00:05:20.557008 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-17 00:05:20.581622 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:05:20.581655 | orchestrator | 2026-03-17 00:05:20.581662 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-17 00:05:20.606295 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:05:20.606338 | orchestrator | 2026-03-17 00:05:20.606348 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-17 00:05:21.256421 | orchestrator | changed: [testbed-manager] 2026-03-17 00:05:21.256470 | orchestrator | 2026-03-17 00:05:21.256481 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-17 00:08:08.394713 | orchestrator | changed: [testbed-manager] 2026-03-17 00:08:08.396715 | orchestrator | 2026-03-17 00:08:08.396746 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-17 00:10:01.882650 | orchestrator | changed: [testbed-manager] 2026-03-17 
00:10:01.882708 | orchestrator | 2026-03-17 00:10:01.882716 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-17 00:10:22.436005 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:22.436067 | orchestrator | 2026-03-17 00:10:22.436083 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-17 00:10:32.240455 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:32.240542 | orchestrator | 2026-03-17 00:10:32.240557 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-17 00:10:32.285396 | orchestrator | ok: [testbed-manager] 2026-03-17 00:10:32.285460 | orchestrator | 2026-03-17 00:10:32.285471 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-17 00:10:33.102246 | orchestrator | ok: [testbed-manager] 2026-03-17 00:10:33.102286 | orchestrator | 2026-03-17 00:10:33.102296 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-17 00:10:33.828307 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:33.828412 | orchestrator | 2026-03-17 00:10:33.828443 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-17 00:10:40.698260 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:40.698359 | orchestrator | 2026-03-17 00:10:40.698400 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-17 00:10:46.489546 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:46.489680 | orchestrator | 2026-03-17 00:10:46.489699 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-17 00:10:50.110274 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:50.110377 | orchestrator | 2026-03-17 00:10:50.110394 | orchestrator | TASK 
[Install docker >= 7.1.0] ************************************************* 2026-03-17 00:10:52.247136 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:52.247191 | orchestrator | 2026-03-17 00:10:52.247200 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-17 00:10:53.306140 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-17 00:10:53.306198 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-17 00:10:53.306209 | orchestrator | 2026-03-17 00:10:53.306220 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-17 00:10:53.351215 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-17 00:10:53.351279 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-17 00:10:53.351290 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-17 00:10:53.351297 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-17 00:11:01.896838 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-17 00:11:01.897156 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-17 00:11:01.897199 | orchestrator | 2026-03-17 00:11:01.897213 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-17 00:11:02.453762 | orchestrator | changed: [testbed-manager] 2026-03-17 00:11:02.453847 | orchestrator | 2026-03-17 00:11:02.453864 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-17 00:15:23.507573 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-17 00:15:23.507620 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-17 00:15:23.507629 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-17 00:15:23.507635 | orchestrator | 2026-03-17 00:15:23.507642 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-17 00:15:25.851283 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-17 00:15:25.851369 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-17 00:15:25.851383 | orchestrator | 2026-03-17 00:15:25.851395 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-17 00:15:25.851407 | orchestrator | 2026-03-17 00:15:25.851419 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:15:27.203786 | orchestrator | ok: [testbed-manager] 2026-03-17 00:15:27.203877 | orchestrator | 2026-03-17 00:15:27.203896 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-17 00:15:27.253617 | orchestrator | ok: [testbed-manager] 2026-03-17 00:15:27.253684 | 
orchestrator | 2026-03-17 00:15:27.253697 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-17 00:15:27.339593 | orchestrator | ok: [testbed-manager] 2026-03-17 00:15:27.339662 | orchestrator | 2026-03-17 00:15:27.339669 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-17 00:15:28.129827 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:28.129918 | orchestrator | 2026-03-17 00:15:28.129935 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-17 00:15:28.846395 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:28.846471 | orchestrator | 2026-03-17 00:15:28.846484 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-17 00:15:30.167519 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-17 00:15:30.278464 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-17 00:15:30.278530 | orchestrator | 2026-03-17 00:15:30.278564 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-17 00:15:31.568732 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:31.568788 | orchestrator | 2026-03-17 00:15:31.568797 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-17 00:15:33.261946 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:15:33.262047 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-17 00:15:33.262061 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:15:33.262070 | orchestrator | 2026-03-17 00:15:33.262081 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-17 00:15:33.318254 | orchestrator | skipping: 
[testbed-manager] 2026-03-17 00:15:33.318317 | orchestrator | 2026-03-17 00:15:33.318327 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-17 00:15:33.390285 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:15:33.390331 | orchestrator | 2026-03-17 00:15:33.390340 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-17 00:15:33.927647 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:33.927734 | orchestrator | 2026-03-17 00:15:33.927750 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-17 00:15:34.008510 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:15:34.008593 | orchestrator | 2026-03-17 00:15:34.008610 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-17 00:15:34.908262 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-17 00:15:34.908495 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:34.908517 | orchestrator | 2026-03-17 00:15:34.908530 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-17 00:15:34.945875 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:15:34.945936 | orchestrator | 2026-03-17 00:15:34.945947 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-17 00:15:34.987155 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:15:34.987275 | orchestrator | 2026-03-17 00:15:34.987292 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-17 00:15:35.023505 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:15:35.023589 | orchestrator | 2026-03-17 00:15:35.023614 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-17 00:15:35.098142 | 
orchestrator | skipping: [testbed-manager] 2026-03-17 00:15:35.098276 | orchestrator | 2026-03-17 00:15:35.098297 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-17 00:15:35.861098 | orchestrator | ok: [testbed-manager] 2026-03-17 00:15:35.861203 | orchestrator | 2026-03-17 00:15:35.861255 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-17 00:15:35.861272 | orchestrator | 2026-03-17 00:15:35.861283 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:15:37.217176 | orchestrator | ok: [testbed-manager] 2026-03-17 00:15:37.217220 | orchestrator | 2026-03-17 00:15:37.217243 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-17 00:15:38.200904 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:38.200989 | orchestrator | 2026-03-17 00:15:38.201006 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:15:38.201019 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-17 00:15:38.201030 | orchestrator | 2026-03-17 00:15:38.605880 | orchestrator | ok: Runtime: 0:10:22.733373 2026-03-17 00:15:38.622105 | 2026-03-17 00:15:38.622236 | TASK [Point out that logging in to the manager is now possible] 2026-03-17 00:15:38.670436 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-17 00:15:38.680943 | 2026-03-17 00:15:38.681074 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-17 00:15:38.726318 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 
2026-03-17 00:15:38.736849 | 2026-03-17 00:15:38.737000 | TASK [Run manager part 1 + 2] 2026-03-17 00:15:40.104706 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-17 00:15:40.162032 | orchestrator | 2026-03-17 00:15:40.162083 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-17 00:15:40.162091 | orchestrator | 2026-03-17 00:15:40.162103 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:15:43.132146 | orchestrator | ok: [testbed-manager] 2026-03-17 00:15:43.132269 | orchestrator | 2026-03-17 00:15:43.132444 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-17 00:15:43.168942 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:15:43.169038 | orchestrator | 2026-03-17 00:15:43.169201 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-17 00:15:43.213917 | orchestrator | ok: [testbed-manager] 2026-03-17 00:15:43.213987 | orchestrator | 2026-03-17 00:15:43.214001 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-17 00:15:43.250399 | orchestrator | ok: [testbed-manager] 2026-03-17 00:15:43.250474 | orchestrator | 2026-03-17 00:15:43.250491 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-17 00:15:43.323875 | orchestrator | ok: [testbed-manager] 2026-03-17 00:15:43.323968 | orchestrator | 2026-03-17 00:15:43.323987 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-17 00:15:43.401642 | orchestrator | ok: [testbed-manager] 2026-03-17 00:15:43.401725 | orchestrator | 2026-03-17 00:15:43.401739 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-17 00:15:43.445342 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-17 00:15:43.445586 | orchestrator | 2026-03-17 00:15:43.445609 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-17 00:15:44.160246 | orchestrator | ok: [testbed-manager] 2026-03-17 00:15:44.160333 | orchestrator | 2026-03-17 00:15:44.160354 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-17 00:15:44.204923 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:15:44.205009 | orchestrator | 2026-03-17 00:15:44.205025 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-17 00:15:45.566310 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:45.566361 | orchestrator | 2026-03-17 00:15:45.566368 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-17 00:15:46.137720 | orchestrator | ok: [testbed-manager] 2026-03-17 00:15:46.137773 | orchestrator | 2026-03-17 00:15:46.137781 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-17 00:15:47.262361 | orchestrator | changed: [testbed-manager] 2026-03-17 00:15:47.262398 | orchestrator | 2026-03-17 00:15:47.262407 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-17 00:16:02.268397 | orchestrator | changed: [testbed-manager] 2026-03-17 00:16:02.268452 | orchestrator | 2026-03-17 00:16:02.268459 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-17 00:16:03.045330 | orchestrator | ok: [testbed-manager] 2026-03-17 00:16:03.045424 | orchestrator | 2026-03-17 00:16:03.045441 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-17 00:16:03.099071 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:16:03.099159 | orchestrator | 2026-03-17 00:16:03.099176 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-17 00:16:04.089717 | orchestrator | changed: [testbed-manager] 2026-03-17 00:16:04.089807 | orchestrator | 2026-03-17 00:16:04.089825 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-17 00:16:05.077043 | orchestrator | changed: [testbed-manager] 2026-03-17 00:16:05.077128 | orchestrator | 2026-03-17 00:16:05.077143 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-17 00:16:05.622426 | orchestrator | changed: [testbed-manager] 2026-03-17 00:16:05.622509 | orchestrator | 2026-03-17 00:16:05.622527 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-17 00:16:05.658167 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-17 00:16:05.658349 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-17 00:16:05.658376 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-17 00:16:05.658389 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-17 00:16:08.122125 | orchestrator | changed: [testbed-manager] 2026-03-17 00:16:08.122220 | orchestrator | 2026-03-17 00:16:08.122233 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-17 00:16:16.996629 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-17 00:16:16.996728 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-17 00:16:16.996747 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-17 00:16:16.996762 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-17 00:16:16.996783 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-17 00:16:16.996796 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-17 00:16:16.996809 | orchestrator | 2026-03-17 00:16:16.996822 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-17 00:16:18.007688 | orchestrator | changed: [testbed-manager] 2026-03-17 00:16:18.007725 | orchestrator | 2026-03-17 00:16:18.007732 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-17 00:16:18.046358 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:16:18.046396 | orchestrator | 2026-03-17 00:16:18.046402 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-17 00:16:21.150742 | orchestrator | changed: [testbed-manager] 2026-03-17 00:16:21.150845 | orchestrator | 2026-03-17 00:16:21.150872 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-17 00:16:21.194912 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:16:21.195015 | orchestrator | 2026-03-17 00:16:21.195041 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-17 00:17:55.397965 | orchestrator | changed: [testbed-manager] 2026-03-17 
00:17:55.398007 | orchestrator | 2026-03-17 00:17:55.398054 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-17 00:17:56.508408 | orchestrator | ok: [testbed-manager] 2026-03-17 00:17:56.508500 | orchestrator | 2026-03-17 00:17:56.508515 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:17:56.508526 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-17 00:17:56.508535 | orchestrator | 2026-03-17 00:17:56.912206 | orchestrator | ok: Runtime: 0:02:17.568617 2026-03-17 00:17:56.931300 | 2026-03-17 00:17:56.931457 | TASK [Reboot manager] 2026-03-17 00:17:58.473246 | orchestrator | ok: Runtime: 0:00:00.936882 2026-03-17 00:17:58.486887 | 2026-03-17 00:17:58.487049 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-17 00:18:12.236022 | orchestrator | ok 2026-03-17 00:18:12.247402 | 2026-03-17 00:18:12.247555 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-17 00:19:12.285146 | orchestrator | ok 2026-03-17 00:19:12.295481 | 2026-03-17 00:19:12.295620 | TASK [Deploy manager + bootstrap nodes] 2026-03-17 00:19:14.804645 | orchestrator | 2026-03-17 00:19:14.804870 | orchestrator | # DEPLOY MANAGER 2026-03-17 00:19:14.804910 | orchestrator | 2026-03-17 00:19:14.804934 | orchestrator | + set -e 2026-03-17 00:19:14.804957 | orchestrator | + echo 2026-03-17 00:19:14.804981 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-17 00:19:14.805008 | orchestrator | + echo 2026-03-17 00:19:14.805065 | orchestrator | + cat /opt/manager-vars.sh 2026-03-17 00:19:14.807774 | orchestrator | export NUMBER_OF_NODES=6 2026-03-17 00:19:14.807807 | orchestrator | 2026-03-17 00:19:14.807820 | orchestrator | export CEPH_VERSION=reef 2026-03-17 00:19:14.807834 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-17 00:19:14.807846 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-03-17 00:19:14.807869 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-17 00:19:14.807880 | orchestrator | 2026-03-17 00:19:14.807898 | orchestrator | export ARA=false 2026-03-17 00:19:14.807910 | orchestrator | export DEPLOY_MODE=manager 2026-03-17 00:19:14.807928 | orchestrator | export TEMPEST=true 2026-03-17 00:19:14.807939 | orchestrator | export IS_ZUUL=true 2026-03-17 00:19:14.807950 | orchestrator | 2026-03-17 00:19:14.807968 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-17 00:19:14.807980 | orchestrator | export EXTERNAL_API=false 2026-03-17 00:19:14.807991 | orchestrator | 2026-03-17 00:19:14.808002 | orchestrator | export IMAGE_USER=ubuntu 2026-03-17 00:19:14.808016 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-17 00:19:14.808027 | orchestrator | 2026-03-17 00:19:14.808038 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-17 00:19:14.808049 | orchestrator | 2026-03-17 00:19:14.808060 | orchestrator | + echo 2026-03-17 00:19:14.808072 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-17 00:19:14.808714 | orchestrator | ++ export INTERACTIVE=false 2026-03-17 00:19:14.808737 | orchestrator | ++ INTERACTIVE=false 2026-03-17 00:19:14.808756 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-17 00:19:14.808768 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-17 00:19:14.808832 | orchestrator | + source /opt/manager-vars.sh 2026-03-17 00:19:14.809039 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-17 00:19:14.809059 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-17 00:19:14.809076 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-17 00:19:14.809087 | orchestrator | ++ CEPH_VERSION=reef 2026-03-17 00:19:14.809098 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-17 00:19:14.809109 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-17 00:19:14.809120 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-17 00:19:14.809131 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-17 00:19:14.809142 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-17 00:19:14.809193 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-17 00:19:14.809205 | orchestrator | ++ export ARA=false 2026-03-17 00:19:14.809216 | orchestrator | ++ ARA=false 2026-03-17 00:19:14.809227 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-17 00:19:14.809238 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-17 00:19:14.809355 | orchestrator | ++ export TEMPEST=true 2026-03-17 00:19:14.809371 | orchestrator | ++ TEMPEST=true 2026-03-17 00:19:14.809382 | orchestrator | ++ export IS_ZUUL=true 2026-03-17 00:19:14.809393 | orchestrator | ++ IS_ZUUL=true 2026-03-17 00:19:14.809404 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-17 00:19:14.809415 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-17 00:19:14.809425 | orchestrator | ++ export EXTERNAL_API=false 2026-03-17 00:19:14.809436 | orchestrator | ++ EXTERNAL_API=false 2026-03-17 00:19:14.809447 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-17 00:19:14.809458 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-17 00:19:14.809469 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-17 00:19:14.809480 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-17 00:19:14.809491 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-17 00:19:14.809502 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-17 00:19:14.809513 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-17 00:19:14.861438 | orchestrator | + docker version 2026-03-17 00:19:14.990626 | orchestrator | Client: Docker Engine - Community 2026-03-17 00:19:14.990719 | orchestrator | Version: 27.5.1 2026-03-17 00:19:14.990739 | orchestrator | API version: 1.47 2026-03-17 00:19:14.990756 | orchestrator | Go version: go1.22.11 2026-03-17 00:19:14.990770 | orchestrator | Git commit: 9f9e405 2026-03-17 00:19:14.990785 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-17 00:19:14.990800 | orchestrator | OS/Arch: linux/amd64 2026-03-17 00:19:14.990814 | orchestrator | Context: default 2026-03-17 00:19:14.990829 | orchestrator | 2026-03-17 00:19:14.990844 | orchestrator | Server: Docker Engine - Community 2026-03-17 00:19:14.990859 | orchestrator | Engine: 2026-03-17 00:19:14.990875 | orchestrator | Version: 27.5.1 2026-03-17 00:19:14.990890 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-17 00:19:14.990941 | orchestrator | Go version: go1.22.11 2026-03-17 00:19:14.990957 | orchestrator | Git commit: 4c9b3b0 2026-03-17 00:19:14.990971 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-17 00:19:14.990980 | orchestrator | OS/Arch: linux/amd64 2026-03-17 00:19:14.990988 | orchestrator | Experimental: false 2026-03-17 00:19:14.990997 | orchestrator | containerd: 2026-03-17 00:19:14.991006 | orchestrator | Version: v2.2.2 2026-03-17 00:19:14.991015 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-17 00:19:14.991025 | orchestrator | runc: 2026-03-17 00:19:14.991034 | orchestrator | Version: 1.3.4 2026-03-17 00:19:14.991042 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-17 00:19:14.991051 | orchestrator | docker-init: 2026-03-17 00:19:14.991060 | orchestrator | Version: 0.19.0 2026-03-17 00:19:14.991069 | orchestrator | GitCommit: de40ad0 2026-03-17 00:19:14.993182 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-17 00:19:15.003322 | orchestrator | + set -e 2026-03-17 00:19:15.004621 | orchestrator | + source /opt/manager-vars.sh 2026-03-17 00:19:15.004660 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-17 00:19:15.004673 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-17 00:19:15.004684 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-17 00:19:15.004695 | orchestrator | ++ CEPH_VERSION=reef 2026-03-17 00:19:15.004706 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-17 
00:19:15.004719 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-17 00:19:15.004730 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-17 00:19:15.004741 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-17 00:19:15.004752 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-17 00:19:15.004763 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-17 00:19:15.004774 | orchestrator | ++ export ARA=false 2026-03-17 00:19:15.004785 | orchestrator | ++ ARA=false 2026-03-17 00:19:15.004796 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-17 00:19:15.004807 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-17 00:19:15.004818 | orchestrator | ++ export TEMPEST=true 2026-03-17 00:19:15.004828 | orchestrator | ++ TEMPEST=true 2026-03-17 00:19:15.004839 | orchestrator | ++ export IS_ZUUL=true 2026-03-17 00:19:15.004850 | orchestrator | ++ IS_ZUUL=true 2026-03-17 00:19:15.004861 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-17 00:19:15.004872 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-17 00:19:15.004883 | orchestrator | ++ export EXTERNAL_API=false 2026-03-17 00:19:15.004894 | orchestrator | ++ EXTERNAL_API=false 2026-03-17 00:19:15.004904 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-17 00:19:15.004915 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-17 00:19:15.004925 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-17 00:19:15.004936 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-17 00:19:15.004961 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-17 00:19:15.004972 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-17 00:19:15.004984 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-17 00:19:15.004995 | orchestrator | ++ export INTERACTIVE=false 2026-03-17 00:19:15.005005 | orchestrator | ++ INTERACTIVE=false 2026-03-17 00:19:15.005016 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-17 00:19:15.005031 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-03-17 00:19:15.005068 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-17 00:19:15.005080 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-17 00:19:15.011005 | orchestrator | + set -e 2026-03-17 00:19:15.011070 | orchestrator | + VERSION=9.5.0 2026-03-17 00:19:15.011093 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-03-17 00:19:15.018235 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-17 00:19:15.018298 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-17 00:19:15.022750 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-17 00:19:15.027342 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-17 00:19:15.035367 | orchestrator | + set -e 2026-03-17 00:19:15.035446 | orchestrator | /opt/configuration ~ 2026-03-17 00:19:15.035462 | orchestrator | + pushd /opt/configuration 2026-03-17 00:19:15.035474 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-17 00:19:15.036583 | orchestrator | + source /opt/venv/bin/activate 2026-03-17 00:19:15.038725 | orchestrator | ++ deactivate nondestructive 2026-03-17 00:19:15.038771 | orchestrator | ++ '[' -n '' ']' 2026-03-17 00:19:15.038786 | orchestrator | ++ '[' -n '' ']' 2026-03-17 00:19:15.038825 | orchestrator | ++ hash -r 2026-03-17 00:19:15.038837 | orchestrator | ++ '[' -n '' ']' 2026-03-17 00:19:15.038848 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-17 00:19:15.038859 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-17 00:19:15.038870 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-17 00:19:15.038886 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-17 00:19:15.038905 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-17 00:19:15.038922 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-17 00:19:15.038940 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-17 00:19:15.038959 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-17 00:19:15.038980 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-17 00:19:15.038999 | orchestrator | ++ export PATH 2026-03-17 00:19:15.039020 | orchestrator | ++ '[' -n '' ']' 2026-03-17 00:19:15.039038 | orchestrator | ++ '[' -z '' ']' 2026-03-17 00:19:15.039051 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-17 00:19:15.039062 | orchestrator | ++ PS1='(venv) ' 2026-03-17 00:19:15.039073 | orchestrator | ++ export PS1 2026-03-17 00:19:15.039083 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-17 00:19:15.039094 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-17 00:19:15.039105 | orchestrator | ++ hash -r 2026-03-17 00:19:15.039116 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-17 00:19:15.985082 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-17 00:19:15.985996 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-17 00:19:15.987240 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-17 00:19:15.988639 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-17 00:19:15.989749 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-17 00:19:15.999487 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-17 00:19:16.001068 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-17 00:19:16.002057 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-17 00:19:16.003462 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-17 00:19:16.031609 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-17 00:19:16.033049 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-17 00:19:16.034732 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-17 00:19:16.036086 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-17 00:19:16.039883 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-17 00:19:16.237111 | orchestrator | ++ which gilt 2026-03-17 00:19:16.241466 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-17 00:19:16.241505 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-17 00:19:16.488261 | orchestrator | osism.cfg-generics: 2026-03-17 00:19:16.627835 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-17 00:19:16.627930 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-17 00:19:16.628298 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-17 00:19:16.628504 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-17 00:19:17.288129 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-17 00:19:17.297546 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-17 00:19:17.629982 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-17 00:19:17.676611 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-17 00:19:17.676707 | orchestrator | + deactivate 2026-03-17 00:19:17.676722 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-17 00:19:17.676735 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-17 00:19:17.676746 | orchestrator | + export PATH 2026-03-17 00:19:17.676758 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-17 00:19:17.676770 | orchestrator | + '[' -n '' ']' 2026-03-17 00:19:17.676783 | orchestrator | + hash -r 2026-03-17 00:19:17.676794 | orchestrator | + '[' -n '' ']' 2026-03-17 00:19:17.676805 | orchestrator | + unset VIRTUAL_ENV 2026-03-17 00:19:17.676816 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-17 00:19:17.676827 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-17 00:19:17.676838 | orchestrator | + unset -f deactivate 2026-03-17 00:19:17.676849 | orchestrator | ~ 2026-03-17 00:19:17.676860 | orchestrator | + popd 2026-03-17 00:19:17.678290 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-17 00:19:17.678310 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-17 00:19:17.679184 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-17 00:19:17.740887 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-17 00:19:17.741003 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-17 00:19:17.742096 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-17 00:19:17.794443 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-17 00:19:17.794999 | orchestrator | ++ semver 2024.2 2025.1 2026-03-17 00:19:17.846713 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-17 00:19:17.846789 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-17 00:19:17.924145 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-17 00:19:17.924232 | orchestrator | + source /opt/venv/bin/activate 2026-03-17 00:19:17.924242 | orchestrator | ++ deactivate nondestructive 2026-03-17 00:19:17.924289 | orchestrator | ++ '[' -n '' ']' 2026-03-17 00:19:17.924299 | orchestrator | ++ '[' -n '' ']' 2026-03-17 00:19:17.924306 | orchestrator | ++ hash -r 2026-03-17 00:19:17.924462 | orchestrator | ++ '[' -n '' ']' 2026-03-17 00:19:17.924474 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-17 00:19:17.924481 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-17 00:19:17.924489 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-17 00:19:17.924664 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-17 00:19:17.924675 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-17 00:19:17.924683 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-17 00:19:17.924690 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-17 00:19:17.924698 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-17 00:19:17.924722 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-17 00:19:17.924814 | orchestrator | ++ export PATH 2026-03-17 00:19:17.924878 | orchestrator | ++ '[' -n '' ']' 2026-03-17 00:19:17.924919 | orchestrator | ++ '[' -z '' ']' 2026-03-17 00:19:17.924929 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-17 00:19:17.925111 | orchestrator | ++ PS1='(venv) ' 2026-03-17 00:19:17.925121 | orchestrator | ++ export PS1 2026-03-17 00:19:17.925128 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-17 00:19:17.925136 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-17 00:19:17.925143 | orchestrator | ++ hash -r 2026-03-17 00:19:17.925275 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-17 00:19:18.919832 | orchestrator | 2026-03-17 00:19:18.919894 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-17 00:19:18.919901 | orchestrator | 2026-03-17 00:19:18.919906 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-17 00:19:19.464565 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:19.464660 | orchestrator | 2026-03-17 00:19:19.464677 | orchestrator | TASK [Copy fact files] ********************************************************* 
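Earlier in the trace, `set-manager-version.sh 9.5.0` pins the manager version with `sed` and, for non-`latest` releases, drops the explicit component pins. A hedged sketch of that approach, run against a temporary configuration file rather than the real `/opt/configuration` tree:

```shell
#!/usr/bin/env bash
# Sketch of a sed-based version-pinning helper in the spirit of
# set-manager-version.sh above. Paths and keys are illustrative.
set -e

pin_manager_version() {
    local version="$1" config="$2"
    # Replace the pinned manager_version line in place.
    sed -i "s/manager_version: .*/manager_version: ${version}/g" "$config"
    if [[ "$version" != "latest" ]]; then
        # For pinned releases, remove explicit component pins so the
        # release defaults apply.
        sed -i '/ceph_version:/d' "$config"
        sed -i '/openstack_version:/d' "$config"
    fi
}

# Demo against a throwaway configuration file.
cfg="$(mktemp)"
printf 'manager_version: latest\nceph_version: reef\nopenstack_version: 2024.1\n' > "$cfg"
pin_manager_version 9.5.0 "$cfg"
result="$(cat "$cfg")"
rm -f "$cfg"
```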
2026-03-17 00:19:20.481016 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:20.481135 | orchestrator | 2026-03-17 00:19:20.481219 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-17 00:19:20.481281 | orchestrator | 2026-03-17 00:19:20.481304 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:19:22.741637 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:22.741727 | orchestrator | 2026-03-17 00:19:22.741743 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-17 00:19:22.796284 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:22.796372 | orchestrator | 2026-03-17 00:19:22.796389 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-17 00:19:23.229145 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:23.229279 | orchestrator | 2026-03-17 00:19:23.229297 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-17 00:19:23.262620 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:23.262735 | orchestrator | 2026-03-17 00:19:23.262764 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-17 00:19:23.584486 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:23.584579 | orchestrator | 2026-03-17 00:19:23.584595 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-17 00:19:23.895636 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:23.895739 | orchestrator | 2026-03-17 00:19:23.895756 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-17 00:19:23.989596 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:23.989683 | orchestrator | 2026-03-17 00:19:23.989699 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-03-17 00:19:23.989712 | orchestrator | 2026-03-17 00:19:23.989723 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:19:26.674627 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:26.674722 | orchestrator | 2026-03-17 00:19:26.674744 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-17 00:19:26.779325 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-17 00:19:26.779425 | orchestrator | 2026-03-17 00:19:26.779442 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-17 00:19:26.834888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-17 00:19:26.834986 | orchestrator | 2026-03-17 00:19:26.835001 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-17 00:19:27.911742 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-17 00:19:27.911841 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-17 00:19:27.911857 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-17 00:19:27.911870 | orchestrator | 2026-03-17 00:19:27.911888 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-17 00:19:29.655824 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-17 00:19:29.655924 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-17 00:19:29.655940 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-17 00:19:29.655952 | orchestrator | 2026-03-17 00:19:29.655964 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-03-17 00:19:30.289337 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-17 00:19:30.289388 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:30.289396 | orchestrator | 2026-03-17 00:19:30.289402 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-17 00:19:30.906441 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-17 00:19:30.906507 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:30.906518 | orchestrator | 2026-03-17 00:19:30.906525 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-17 00:19:30.955221 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:30.955308 | orchestrator | 2026-03-17 00:19:30.955324 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-17 00:19:31.315877 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:31.315957 | orchestrator | 2026-03-17 00:19:31.315970 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-17 00:19:31.393669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-17 00:19:31.393753 | orchestrator | 2026-03-17 00:19:31.393769 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-17 00:19:32.480456 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:32.480528 | orchestrator | 2026-03-17 00:19:32.480539 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-17 00:19:33.344676 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:33.344774 | orchestrator | 2026-03-17 00:19:33.344787 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-17 00:19:57.073331 | 
orchestrator | changed: [testbed-manager] 2026-03-17 00:19:57.073403 | orchestrator | 2026-03-17 00:19:57.073412 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-17 00:19:57.125718 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:57.125784 | orchestrator | 2026-03-17 00:19:57.125808 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-17 00:19:57.125817 | orchestrator | 2026-03-17 00:19:57.125824 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:19:58.857530 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:58.857614 | orchestrator | 2026-03-17 00:19:58.857626 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-17 00:19:58.956849 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-17 00:19:58.956941 | orchestrator | 2026-03-17 00:19:58.956957 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-17 00:19:59.022120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-17 00:19:59.022287 | orchestrator | 2026-03-17 00:19:59.022303 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-17 00:20:01.341304 | orchestrator | ok: [testbed-manager] 2026-03-17 00:20:01.341403 | orchestrator | 2026-03-17 00:20:01.341420 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-17 00:20:01.391383 | orchestrator | ok: [testbed-manager] 2026-03-17 00:20:01.391470 | orchestrator | 2026-03-17 00:20:01.391485 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-17 00:20:01.505604 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-17 00:20:01.505697 | orchestrator | 2026-03-17 00:20:01.505712 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-17 00:20:04.135483 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-17 00:20:04.135557 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-17 00:20:04.135569 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-17 00:20:04.135599 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-17 00:20:04.135609 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-17 00:20:04.135620 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-17 00:20:04.135629 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-17 00:20:04.135639 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-17 00:20:04.135649 | orchestrator | 2026-03-17 00:20:04.135660 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-17 00:20:04.742582 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:04.742693 | orchestrator | 2026-03-17 00:20:04.742719 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-17 00:20:05.383125 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:05.383289 | orchestrator | 2026-03-17 00:20:05.383308 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-17 00:20:05.458307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-17 00:20:05.458397 | orchestrator | 2026-03-17 00:20:05.458413 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-03-17 00:20:06.626976 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-17 00:20:06.627058 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-17 00:20:06.627072 | orchestrator | 2026-03-17 00:20:06.627085 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-17 00:20:07.201458 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:07.201524 | orchestrator | 2026-03-17 00:20:07.201537 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-17 00:20:07.258721 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:20:07.258782 | orchestrator | 2026-03-17 00:20:07.258792 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-17 00:20:07.330809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-17 00:20:07.330899 | orchestrator | 2026-03-17 00:20:07.330914 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-17 00:20:07.937748 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:07.937848 | orchestrator | 2026-03-17 00:20:07.937875 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-17 00:20:07.988722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-17 00:20:07.988798 | orchestrator | 2026-03-17 00:20:07.988812 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-17 00:20:09.284729 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-17 00:20:09.284816 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-03-17 00:20:09.284831 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:09.284844 | orchestrator | 2026-03-17 00:20:09.284856 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-17 00:20:09.870427 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:09.870515 | orchestrator | 2026-03-17 00:20:09.870532 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-17 00:20:09.927075 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:20:09.927165 | orchestrator | 2026-03-17 00:20:09.927190 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-17 00:20:10.019984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-17 00:20:10.020065 | orchestrator | 2026-03-17 00:20:10.020080 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-17 00:20:10.542108 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:10.542235 | orchestrator | 2026-03-17 00:20:10.542263 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-17 00:20:10.938471 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:10.938568 | orchestrator | 2026-03-17 00:20:10.938599 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-17 00:20:12.127367 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-17 00:20:12.127468 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-17 00:20:12.127483 | orchestrator | 2026-03-17 00:20:12.127496 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-17 00:20:12.763302 | orchestrator | changed: [testbed-manager] 2026-03-17 
00:20:12.763404 | orchestrator | 2026-03-17 00:20:12.763421 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-17 00:20:13.108741 | orchestrator | ok: [testbed-manager] 2026-03-17 00:20:13.108858 | orchestrator | 2026-03-17 00:20:13.108883 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-17 00:20:13.444776 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:13.444902 | orchestrator | 2026-03-17 00:20:13.444932 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-17 00:20:13.487273 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:20:13.487362 | orchestrator | 2026-03-17 00:20:13.487378 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-17 00:20:13.557240 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-17 00:20:13.557338 | orchestrator | 2026-03-17 00:20:13.557350 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-17 00:20:13.587117 | orchestrator | ok: [testbed-manager] 2026-03-17 00:20:13.587250 | orchestrator | 2026-03-17 00:20:13.587264 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-17 00:20:15.415837 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-17 00:20:15.415936 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-17 00:20:15.415951 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-17 00:20:15.415962 | orchestrator | 2026-03-17 00:20:15.415974 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-17 00:20:16.070445 | orchestrator | changed: [testbed-manager] 2026-03-17 
00:20:16.070542 | orchestrator | 2026-03-17 00:20:16.070559 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-17 00:20:16.720188 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:16.720289 | orchestrator | 2026-03-17 00:20:16.720305 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-17 00:20:17.385251 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:17.385364 | orchestrator | 2026-03-17 00:20:17.385382 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-17 00:20:17.452643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-17 00:20:17.452733 | orchestrator | 2026-03-17 00:20:17.452750 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-17 00:20:17.494268 | orchestrator | ok: [testbed-manager] 2026-03-17 00:20:17.494359 | orchestrator | 2026-03-17 00:20:17.494374 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-17 00:20:18.189343 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-17 00:20:18.189442 | orchestrator | 2026-03-17 00:20:18.189459 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-17 00:20:18.274847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-17 00:20:18.274933 | orchestrator | 2026-03-17 00:20:18.274946 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-17 00:20:18.958093 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:18.958216 | orchestrator | 2026-03-17 00:20:18.958235 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-03-17 00:20:19.528737 | orchestrator | ok: [testbed-manager] 2026-03-17 00:20:19.528856 | orchestrator | 2026-03-17 00:20:19.528875 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-17 00:20:19.572881 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:20:19.572976 | orchestrator | 2026-03-17 00:20:19.572992 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-17 00:20:19.624415 | orchestrator | ok: [testbed-manager] 2026-03-17 00:20:19.624505 | orchestrator | 2026-03-17 00:20:19.624521 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-17 00:20:20.394473 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:20.394565 | orchestrator | 2026-03-17 00:20:20.394582 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-17 00:21:21.548569 | orchestrator | changed: [testbed-manager] 2026-03-17 00:21:21.548714 | orchestrator | 2026-03-17 00:21:21.548732 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-17 00:21:22.472256 | orchestrator | ok: [testbed-manager] 2026-03-17 00:21:22.472365 | orchestrator | 2026-03-17 00:21:22.472388 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-17 00:21:22.528332 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:21:22.528441 | orchestrator | 2026-03-17 00:21:22.528456 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-17 00:21:28.807452 | orchestrator | changed: [testbed-manager] 2026-03-17 00:21:28.807568 | orchestrator | 2026-03-17 00:21:28.807586 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
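Several branches in this trace are gated by a `semver` helper that prints 1, 0, or -1 depending on how two versions compare (e.g. `semver 9.5.0 7.0.0` yielding 1 enables `enable_osism_kubernetes`, and the mariadb healthcheck above is selected by a version comparison). A simplified sketch of such a comparator, built on `sort -V`; pre-release ordering (e.g. `10.0.0-0`) is not handled as precisely as a full semver implementation:

```shell
# Hedged sketch of a semver-style comparison helper similar in
# behaviour to the `semver` calls in the trace: prints 1, 0, or -1
# when the first version is greater than, equal to, or less than
# the second.
semver_cmp() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1
    else
        echo 1
    fi
}

# Feature gate as in the trace: enable a flag from 7.0.0 onwards.
cmp_result="$(semver_cmp 9.5.0 7.0.0)"
if [[ "$cmp_result" -ge 0 ]]; then
    gate="enable_osism_kubernetes: true"
fi
```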
2026-03-17 00:21:28.863934 | orchestrator | ok: [testbed-manager] 2026-03-17 00:21:28.864022 | orchestrator | 2026-03-17 00:21:28.864051 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-17 00:21:28.864081 | orchestrator | 2026-03-17 00:21:28.864098 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-17 00:21:29.012834 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:21:29.012917 | orchestrator | 2026-03-17 00:21:29.012931 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-17 00:22:29.059699 | orchestrator | Pausing for 60 seconds 2026-03-17 00:22:29.059789 | orchestrator | changed: [testbed-manager] 2026-03-17 00:22:29.059801 | orchestrator | 2026-03-17 00:22:29.059811 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-17 00:22:31.602543 | orchestrator | changed: [testbed-manager] 2026-03-17 00:22:31.602643 | orchestrator | 2026-03-17 00:22:31.602659 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-17 00:23:13.077767 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-17 00:23:13.077893 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
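The "Wait for an healthy manager service" handler above retries its health probe up to 50 times, logging a `FAILED - RETRYING` line per failed attempt, before succeeding. The same pattern can be sketched as a generic poll-until-success loop; the probe command, retry budget, and delay here are illustrative (the real handler inspects the container health status via Docker):

```shell
# Sketch of a retry-until-healthy loop in the spirit of the Ansible
# handler above: run a probe command until it exits 0 or the retry
# budget is exhausted.
wait_for_healthy() {
    local retries="$1" delay="$2"
    shift 2
    local attempt
    for ((attempt = 1; attempt <= retries; attempt++)); do
        if "$@"; then
            return 0   # probe succeeded: service is healthy
        fi
        sleep "$delay"
    done
    return 1           # budget exhausted: still unhealthy
}

# Example: `true` always exits 0, so this succeeds on attempt 1.
wait_for_healthy 50 0 true && healthy=yes
```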
2026-03-17 00:23:13.077917 | orchestrator | changed: [testbed-manager]
2026-03-17 00:23:13.077939 | orchestrator |
2026-03-17 00:23:13.077982 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-17 00:23:22.836041 | orchestrator | changed: [testbed-manager]
2026-03-17 00:23:22.836188 | orchestrator |
2026-03-17 00:23:22.836206 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-17 00:23:22.937670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-17 00:23:22.937766 | orchestrator |
2026-03-17 00:23:22.937781 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-17 00:23:22.937794 | orchestrator |
2026-03-17 00:23:22.937805 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-17 00:23:22.997893 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:23:22.997978 | orchestrator |
2026-03-17 00:23:22.998000 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-17 00:23:23.070384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-17 00:23:23.070502 | orchestrator |
2026-03-17 00:23:23.070529 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-17 00:23:23.799337 | orchestrator | changed: [testbed-manager]
2026-03-17 00:23:23.799437 | orchestrator |
2026-03-17 00:23:23.799454 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-17 00:23:26.773704 | orchestrator | ok: [testbed-manager]
2026-03-17 00:23:26.773804 | orchestrator |
2026-03-17 00:23:26.773821 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-17 00:23:26.842934 | orchestrator | ok: [testbed-manager] => {
2026-03-17 00:23:26.843028 | orchestrator | "version_check_result.stdout_lines": [
2026-03-17 00:23:26.843043 | orchestrator | "=== OSISM Container Version Check ===",
2026-03-17 00:23:26.843056 | orchestrator | "Checking running containers against expected versions...",
2026-03-17 00:23:26.843069 | orchestrator | "",
2026-03-17 00:23:26.843082 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-17 00:23:26.843094 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-17 00:23:26.843184 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.843199 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-17 00:23:26.843210 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.843221 | orchestrator | "",
2026-03-17 00:23:26.843232 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-17 00:23:26.843244 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-17 00:23:26.843255 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.843291 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-17 00:23:26.843304 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.843315 | orchestrator | "",
2026-03-17 00:23:26.843325 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-17 00:23:26.843336 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-17 00:23:26.843347 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.843358 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-17 00:23:26.843369 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.843380 | orchestrator | "",
2026-03-17 00:23:26.843390 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-17 00:23:26.843401 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-17 00:23:26.843412 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.843423 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-17 00:23:26.843433 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.843444 | orchestrator | "",
2026-03-17 00:23:26.843455 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-17 00:23:26.843468 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-17 00:23:26.843480 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.843493 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-17 00:23:26.843506 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.843518 | orchestrator | "",
2026-03-17 00:23:26.843530 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-03-17 00:23:26.843542 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-17 00:23:26.843554 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.843566 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-17 00:23:26.843579 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.843592 | orchestrator | "",
2026-03-17 00:23:26.843604 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-03-17 00:23:26.843615 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-17 00:23:26.843626 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.843644 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-17 00:23:26.843662 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.843679 | orchestrator | "",
2026-03-17 00:23:26.843696 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-17 00:23:26.843715 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-17 00:23:26.843735 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.843754 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-17 00:23:26.843775 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.843796 | orchestrator | "",
2026-03-17 00:23:26.843817 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-17 00:23:26.843838 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-17 00:23:26.843860 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.843881 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-17 00:23:26.843902 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.843923 | orchestrator | "",
2026-03-17 00:23:26.843942 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-17 00:23:26.843963 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-17 00:23:26.843984 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.844006 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-17 00:23:26.844026 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.844047 | orchestrator | "",
2026-03-17 00:23:26.844067 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-17 00:23:26.844089 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-17 00:23:26.844137 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.844173 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-17 00:23:26.844193 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.844211 | orchestrator | "",
2026-03-17 00:23:26.844229 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-17 00:23:26.844241 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-17 00:23:26.844251 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.844262 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-17 00:23:26.844273 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.844283 | orchestrator | "",
2026-03-17 00:23:26.844295 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-03-17 00:23:26.844306 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-17 00:23:26.844317 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.844327 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-17 00:23:26.844338 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.844349 | orchestrator | "",
2026-03-17 00:23:26.844359 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-03-17 00:23:26.844370 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-17 00:23:26.844380 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.844391 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-17 00:23:26.844424 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.844435 | orchestrator | "",
2026-03-17 00:23:26.844446 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-03-17 00:23:26.844457 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-17 00:23:26.844467 | orchestrator | " Enabled: true",
2026-03-17 00:23:26.844489 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-17 00:23:26.844500 | orchestrator | " Status: ✅ MATCH",
2026-03-17 00:23:26.844511 | orchestrator | "",
2026-03-17 00:23:26.844522 | orchestrator | "=== Summary ===",
2026-03-17 00:23:26.844532 | orchestrator | "Errors (version mismatches): 0",
2026-03-17 00:23:26.844543 | orchestrator | "Warnings (expected containers not running): 0",
2026-03-17 00:23:26.844554 | orchestrator | "",
2026-03-17 00:23:26.844565 | orchestrator | "✅ All running containers match expected versions!"
2026-03-17 00:23:26.844576 | orchestrator | ]
2026-03-17 00:23:26.844587 | orchestrator | }
2026-03-17 00:23:26.844598 | orchestrator |
2026-03-17 00:23:26.844608 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-17 00:23:26.889460 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:23:26.889557 | orchestrator |
2026-03-17 00:23:26.889569 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:23:26.889579 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-17 00:23:26.889588 | orchestrator |
2026-03-17 00:23:26.961715 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-17 00:23:26.961803 | orchestrator | + deactivate
2026-03-17 00:23:26.961817 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-17 00:23:26.961829 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-17 00:23:26.961838 | orchestrator | + export PATH
2026-03-17 00:23:26.961848 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-17 00:23:26.961859 | orchestrator | + '[' -n '' ']'
2026-03-17 00:23:26.961869 | orchestrator | + hash -r
2026-03-17 00:23:26.961879 | orchestrator | + '[' -n '' ']'
2026-03-17 00:23:26.961888 | orchestrator | + unset VIRTUAL_ENV
2026-03-17 00:23:26.961898 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-17 00:23:26.961908 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-17 00:23:26.961918 | orchestrator | + unset -f deactivate
2026-03-17 00:23:26.961928 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-03-17 00:23:26.967089 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-17 00:23:26.967137 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-17 00:23:26.967148 | orchestrator | + local max_attempts=60
2026-03-17 00:23:26.967158 | orchestrator | + local name=ceph-ansible
2026-03-17 00:23:26.967192 | orchestrator | + local attempt_num=1
2026-03-17 00:23:26.967697 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:23:26.997548 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:23:26.997634 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-17 00:23:26.997648 | orchestrator | + local max_attempts=60
2026-03-17 00:23:26.997661 | orchestrator | + local name=kolla-ansible
2026-03-17 00:23:26.997672 | orchestrator | + local attempt_num=1
2026-03-17 00:23:26.998073 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-17 00:23:27.027535 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:23:27.027626 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-17 00:23:27.027641 | orchestrator | + local max_attempts=60
2026-03-17 00:23:27.027654 | orchestrator | + local name=osism-ansible
2026-03-17 00:23:27.027665 | orchestrator | + local attempt_num=1
2026-03-17 00:23:27.028423 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-17 00:23:27.056397 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:23:27.056501 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-17 00:23:27.056524 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-17 00:23:27.696030 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-17 00:23:27.874366 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-17 00:23:27.874461 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2026-03-17 00:23:27.874475 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2026-03-17 00:23:27.874487 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2026-03-17 00:23:27.874499 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2026-03-17 00:23:27.874534 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2026-03-17 00:23:27.874546 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2026-03-17 00:23:27.874557 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy)
2026-03-17 00:23:27.874567 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2026-03-17 00:23:27.874578 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2026-03-17 00:23:27.874595 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2026-03-17 00:23:27.874614 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2026-03-17 00:23:27.874632 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2026-03-17 00:23:27.874679 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2026-03-17 00:23:27.874692 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2026-03-17 00:23:27.874703 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2026-03-17 00:23:27.879213 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-17 00:23:27.923497 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-17 00:23:27.923590 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-03-17 00:23:27.927754 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-03-17 00:23:39.885683 | orchestrator | 2026-03-17 00:23:39 | INFO  | Task 976b5d94-fbdb-4078-8be3-2de1c3d0a0c2 (resolvconf) was prepared for execution.
2026-03-17 00:23:39.885793 | orchestrator | 2026-03-17 00:23:39 | INFO  | It takes a moment until task 976b5d94-fbdb-4078-8be3-2de1c3d0a0c2 (resolvconf) has been started and output is visible here.
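The version check earlier in this play compares, per service, the expected image reference against the one the running container actually uses, and counts any difference as an error in the summary. A minimal sketch of that comparison logic, assuming plain string equality between image references (the function name and output format are illustrative, not the role's actual script):

```shell
#!/usr/bin/env bash
# Illustrative version check: take pairs of (expected, running) image
# references, report MATCH/MISMATCH per pair, and summarize mismatches,
# mirroring the "Errors (version mismatches)" line in the log above.
check_versions() {
    local errors=0
    while [ "$#" -ge 2 ]; do
        local expected=$1 running=$2
        shift 2
        if [ "$expected" = "$running" ]; then
            echo "Status: MATCH ($expected)"
        else
            echo "Status: MISMATCH (expected $expected, running $running)"
            errors=$((errors + 1))
        fi
    done
    echo "Errors (version mismatches): $errors"
    [ "$errors" -eq 0 ]
}
```

A nonzero exit on mismatch lets a wrapper script or Ansible task fail the deployment when any container drifts from the pinned release.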
2026-03-17 00:23:52.951473 | orchestrator |
2026-03-17 00:23:52.951580 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-03-17 00:23:52.951597 | orchestrator |
2026-03-17 00:23:52.951608 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-17 00:23:52.951620 | orchestrator | Tuesday 17 March 2026 00:23:43 +0000 (0:00:00.134) 0:00:00.134 *********
2026-03-17 00:23:52.951631 | orchestrator | ok: [testbed-manager]
2026-03-17 00:23:52.951642 | orchestrator |
2026-03-17 00:23:52.951654 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-17 00:23:52.951665 | orchestrator | Tuesday 17 March 2026 00:23:47 +0000 (0:00:03.279) 0:00:03.414 *********
2026-03-17 00:23:52.951676 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:23:52.951688 | orchestrator |
2026-03-17 00:23:52.951699 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-17 00:23:52.951710 | orchestrator | Tuesday 17 March 2026 00:23:47 +0000 (0:00:00.067) 0:00:03.481 *********
2026-03-17 00:23:52.951721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-03-17 00:23:52.951733 | orchestrator |
2026-03-17 00:23:52.951744 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-17 00:23:52.951754 | orchestrator | Tuesday 17 March 2026 00:23:47 +0000 (0:00:00.086) 0:00:03.568 *********
2026-03-17 00:23:52.951785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-03-17 00:23:52.951797 | orchestrator |
2026-03-17 00:23:52.951808 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-17 00:23:52.951819 | orchestrator | Tuesday 17 March 2026 00:23:47 +0000 (0:00:00.072) 0:00:03.640 *********
2026-03-17 00:23:52.951830 | orchestrator | ok: [testbed-manager]
2026-03-17 00:23:52.951840 | orchestrator |
2026-03-17 00:23:52.951851 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-17 00:23:52.951862 | orchestrator | Tuesday 17 March 2026 00:23:48 +0000 (0:00:01.061) 0:00:04.702 *********
2026-03-17 00:23:52.951873 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:23:52.951884 | orchestrator |
2026-03-17 00:23:52.951895 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-17 00:23:52.951905 | orchestrator | Tuesday 17 March 2026 00:23:48 +0000 (0:00:00.045) 0:00:04.747 *********
2026-03-17 00:23:52.951916 | orchestrator | ok: [testbed-manager]
2026-03-17 00:23:52.951950 | orchestrator |
2026-03-17 00:23:52.951962 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-17 00:23:52.951973 | orchestrator | Tuesday 17 March 2026 00:23:48 +0000 (0:00:00.475) 0:00:05.222 *********
2026-03-17 00:23:52.951983 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:23:52.951994 | orchestrator |
2026-03-17 00:23:52.952005 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-17 00:23:52.952018 | orchestrator | Tuesday 17 March 2026 00:23:48 +0000 (0:00:00.080) 0:00:05.302 *********
2026-03-17 00:23:52.952031 | orchestrator | changed: [testbed-manager]
2026-03-17 00:23:52.952046 | orchestrator |
2026-03-17 00:23:52.952064 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-17 00:23:52.952083 | orchestrator | Tuesday 17 March 2026 00:23:49 +0000 (0:00:00.514) 0:00:05.817 *********
2026-03-17 00:23:52.952143 | orchestrator | changed: [testbed-manager]
2026-03-17 00:23:52.952164 | orchestrator |
2026-03-17 00:23:52.952182 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-17 00:23:52.952201 | orchestrator | Tuesday 17 March 2026 00:23:50 +0000 (0:00:01.040) 0:00:06.857 *********
2026-03-17 00:23:52.952218 | orchestrator | ok: [testbed-manager]
2026-03-17 00:23:52.952237 | orchestrator |
2026-03-17 00:23:52.952255 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-17 00:23:52.952274 | orchestrator | Tuesday 17 March 2026 00:23:51 +0000 (0:00:00.958) 0:00:07.815 *********
2026-03-17 00:23:52.952295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-03-17 00:23:52.952313 | orchestrator |
2026-03-17 00:23:52.952331 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-17 00:23:52.952349 | orchestrator | Tuesday 17 March 2026 00:23:51 +0000 (0:00:00.070) 0:00:07.886 *********
2026-03-17 00:23:52.952367 | orchestrator | changed: [testbed-manager]
2026-03-17 00:23:52.952384 | orchestrator |
2026-03-17 00:23:52.952401 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:23:52.952420 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-17 00:23:52.952438 | orchestrator |
2026-03-17 00:23:52.952457 | orchestrator |
2026-03-17 00:23:52.952476 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:23:52.952494 | orchestrator | Tuesday 17 March 2026 00:23:52 +0000 (0:00:01.150) 0:00:09.037 *********
2026-03-17 00:23:52.952512 | orchestrator | ===============================================================================
2026-03-17 00:23:52.952529 | orchestrator | Gathering Facts --------------------------------------------------------- 3.28s
2026-03-17 00:23:52.952547 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.15s
2026-03-17 00:23:52.952565 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.06s
2026-03-17 00:23:52.952583 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s
2026-03-17 00:23:52.952602 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.96s
2026-03-17 00:23:52.952621 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.51s
2026-03-17 00:23:52.952667 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s
2026-03-17 00:23:52.952688 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2026-03-17 00:23:52.952707 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-03-17 00:23:52.952726 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2026-03-17 00:23:52.952744 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s
2026-03-17 00:23:52.952764 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2026-03-17 00:23:52.952791 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s
2026-03-17 00:23:53.256293 | orchestrator | + osism apply sshconfig
2026-03-17 00:24:05.323308 | orchestrator | 2026-03-17 00:24:05 | INFO  | Task fe934e1a-836a-4a67-806d-1994e912e592 (sshconfig) was prepared for execution.
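The resolvconf play's only `changed` tasks link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf and restart systemd-resolved. The role does this with Ansible's file module; as a plain-shell sketch of the same idempotent link-or-fix step (paths are parameterized here for illustration, with the real arguments being /run/systemd/resolve/stub-resolv.conf and /etc/resolv.conf):

```shell
#!/usr/bin/env bash
# Ensure $2 is a symlink pointing at $1, reporting "ok" when nothing
# needs to change and "changed" when the link was created or fixed --
# the same ok/changed semantics the Ansible task reports above.
ensure_symlink() {
    local target=$1 link=$2
    if [ -L "$link" ] && [ "$(readlink "$link")" = "$target" ]; then
        echo "ok: $link -> $target"
        return 0
    fi
    ln -sfn "$target" "$link"
    echo "changed: $link -> $target"
}
```

Running it a second time reports `ok` instead of `changed`, which is why the task only shows `changed` on the first deployment.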
2026-03-17 00:24:05.323441 | orchestrator | 2026-03-17 00:24:05 | INFO  | It takes a moment until task fe934e1a-836a-4a67-806d-1994e912e592 (sshconfig) has been started and output is visible here.
2026-03-17 00:24:16.129331 | orchestrator |
2026-03-17 00:24:16.129447 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-03-17 00:24:16.129465 | orchestrator |
2026-03-17 00:24:16.129512 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-03-17 00:24:16.129526 | orchestrator | Tuesday 17 March 2026 00:24:09 +0000 (0:00:00.116) 0:00:00.116 *********
2026-03-17 00:24:16.129538 | orchestrator | ok: [testbed-manager]
2026-03-17 00:24:16.129551 | orchestrator |
2026-03-17 00:24:16.129562 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-03-17 00:24:16.129573 | orchestrator | Tuesday 17 March 2026 00:24:09 +0000 (0:00:00.506) 0:00:00.622 *********
2026-03-17 00:24:16.129584 | orchestrator | changed: [testbed-manager]
2026-03-17 00:24:16.129596 | orchestrator |
2026-03-17 00:24:16.129607 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-03-17 00:24:16.129618 | orchestrator | Tuesday 17 March 2026 00:24:10 +0000 (0:00:00.416) 0:00:01.038 *********
2026-03-17 00:24:16.129629 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-03-17 00:24:16.129640 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-03-17 00:24:16.129652 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-03-17 00:24:16.129662 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-03-17 00:24:16.129673 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-03-17 00:24:16.129684 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-03-17 00:24:16.129695 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-03-17 00:24:16.129705 | orchestrator |
2026-03-17 00:24:16.129716 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-03-17 00:24:16.129727 | orchestrator | Tuesday 17 March 2026 00:24:15 +0000 (0:00:05.147) 0:00:06.185 *********
2026-03-17 00:24:16.129738 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:24:16.129749 | orchestrator |
2026-03-17 00:24:16.129760 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-03-17 00:24:16.129771 | orchestrator | Tuesday 17 March 2026 00:24:15 +0000 (0:00:00.071) 0:00:06.257 *********
2026-03-17 00:24:16.129781 | orchestrator | changed: [testbed-manager]
2026-03-17 00:24:16.129792 | orchestrator |
2026-03-17 00:24:16.129803 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:24:16.129815 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:24:16.129827 | orchestrator |
2026-03-17 00:24:16.129838 | orchestrator |
2026-03-17 00:24:16.129849 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:24:16.129860 | orchestrator | Tuesday 17 March 2026 00:24:15 +0000 (0:00:00.541) 0:00:06.799 *********
2026-03-17 00:24:16.129871 | orchestrator | ===============================================================================
2026-03-17 00:24:16.129881 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.15s
2026-03-17 00:24:16.129892 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s
2026-03-17 00:24:16.129903 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.51s
2026-03-17 00:24:16.129914 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.42s
2026-03-17 00:24:16.129925 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2026-03-17 00:24:16.383268 | orchestrator | + osism apply known-hosts
2026-03-17 00:24:28.350524 | orchestrator | 2026-03-17 00:24:28 | INFO  | Task 1455d4c1-7cc7-4c7c-ba70-8df1c35bb670 (known-hosts) was prepared for execution.
2026-03-17 00:24:28.350640 | orchestrator | 2026-03-17 00:24:28 | INFO  | It takes a moment until task 1455d4c1-7cc7-4c7c-ba70-8df1c35bb670 (known-hosts) has been started and output is visible here.
2026-03-17 00:24:44.762922 | orchestrator |
2026-03-17 00:24:44.763035 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-03-17 00:24:44.763054 | orchestrator |
2026-03-17 00:24:44.763066 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-03-17 00:24:44.763079 | orchestrator | Tuesday 17 March 2026 00:24:32 +0000 (0:00:00.159) 0:00:00.159 *********
2026-03-17 00:24:44.763139 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-17 00:24:44.763153 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-17 00:24:44.763164 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-17 00:24:44.763176 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-17 00:24:44.763187 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-17 00:24:44.763198 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-17 00:24:44.763209 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-17 00:24:44.763220 | orchestrator |
2026-03-17 00:24:44.763231 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-03-17 00:24:44.763242 | orchestrator | Tuesday 17 March 2026 00:24:38 +0000 (0:00:05.852) 0:00:06.011 *********
2026-03-17 00:24:44.763254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-17 00:24:44.763267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-17 00:24:44.763278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-17 00:24:44.763289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-17 00:24:44.763300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-17 00:24:44.763320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-17 00:24:44.763331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-17 00:24:44.763342 | orchestrator |
2026-03-17 00:24:44.763353 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-17 00:24:44.763364 | orchestrator | Tuesday 17 March 2026 00:24:38 +0000 (0:00:00.148) 0:00:06.160 *********
2026-03-17 00:24:44.763375 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB6r9jLONQV3MhcbiuW+ZXmp2PnFW8Mq//spJMll5th9a9d3RGf3WOgZWhsEZ1mpiCtGoucAxA/41WBQQGkWBVI=)
2026-03-17 00:24:44.763396 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDG8K2LiQZpLf3UKfNfK5WWJUkSIXdy8Zed3Q7qI7QleAXlYDFCJioilhoNpiWl7Q+MeXJ7+QAVSC2wNShvhaLeXIpjtbJE6306jhcysFAexRV3QdDeIY7K7M8E9/Wer/NBG3ke7JnYZJ2Lbk2xS75anSBVu/hN0P3jae+FyL9IomVdjp6y+n8mgyyzEsA/B3oorEt0UGvila3CVwvIhs33XVHrswUzdHSBojNeD5SfesqmMZ1X+Xb2QGbG/70DdO3sB20vXPGjka2RgyUDVbw9aiYQAFhIsJotmMRfEFZOfxjejXKuJ9K4qHRRT7mm5MEVSAtP6uqudJJbuEGwPStH5r2p7UKUrBnZCd3AYPmK05RdbSQdNKpXgnzaSLd8FI00KApf+K96crYhfC7yVEVLFe8pdW1whfYunFGQUMMDUAIGLUl6iaublfoKU4tPgKHRnKcC6exqtm4KSU3v62J+0eUm+YAr/cqZkNuuJX9juP0hRnYvih1IDB+yq9X5yO0=)
2026-03-17 00:24:44.763428 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBadGpmXMP1x2Ir8hO76YuKMbfLUP1pmbK7x78gkIP3g)
2026-03-17 00:24:44.763443 | orchestrator |
2026-03-17 00:24:44.763456 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-17 00:24:44.763469 | orchestrator | Tuesday 17 March 2026 00:24:39 +0000 (0:00:01.130) 0:00:07.290 *********
2026-03-17 00:24:44.763499 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC87P7hfleBb6eBhWyL3W5eIGgLxjBGhQ30z7yhO1T2L4fRXJEO7nscY/2zziP6EuKaynsZt7jSmsA1/MIGRfxQWqZj3mH8JR0dEP+w1+lNjMF6pLt8byj9l5H3B0U6Sih3G15Tq/oyw4vf2nMVtPMnw0v7Qtbz+qE36Wdh1cVgVNKbkQqN2wyEg2Guuvcj+/1wgMjn6lSiSVDJIxW41WqJ8fer9IBDz2fY7cRGCvF+67VM0poK2mmqPlFyGcFuVIZThKAtIMa6y6cIcPAT3dqmI7xxg0ORnvVtlMSmLrgdVcXPQ46Vawn74zMrA3kI5H/pt3VA+oVUnr/9xgPrdVwtLwFPnq6mfPs/tiuv7/VqekxWTH8uwL8xnrBvkQWxXvBuJKcSxVZWsHCGNWxqxMIZ78nOqwYQI6U8Bafe859+kfoSyvspmhvjiNP7tgcCzMptAGmtMi6myVsCqA7/y+bYyM7IWvnz0mvZchrgjRGxBNSP2JQRbABsuk+FDRxOvP0=)
2026-03-17 00:24:44.763514 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI4eKej0r2Cr1VUCZwE8hI4mP7k39YDzXICW3TK2Bcp5Mu1cD18FUVtKslRA1BqbRmEyBon0YJ/jM8RRlYNdN2M=)
2026-03-17 00:24:44.763527 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEtAI4Bj53HOjHECbpjU3GGHlYR6/CZ7Ch6S4ukn/tgC)
2026-03-17 00:24:44.763539 | orchestrator |
2026-03-17 00:24:44.763551 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-17 00:24:44.763563 | orchestrator | Tuesday 17 March 2026 00:24:40 +0000 (0:00:01.041) 0:00:08.332 *********
2026-03-17 00:24:44.763576 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpZEZAmcV0O0mYJFNXgnySksiqDWrmNd5C/PdLVVFJ93h4d6BlEQuKZ2IuRnuZwvv9bPL4LtK46bctmc+qf4+WuWLVBBnl33D/0ut3yxgQwGzx9rAfMTyM3qQdxz/fjn547JK+Ubk6oeKWundNyoDV+3LIYtKATSjDA0p61zupJ7P2nnBfxggGY/wb/grjqhpQVtnjzolHqhW1B+XTVerANVmVtqj8I99mYnajNHPPWl7ACELyFZCLULRg9Uj+FoHm/mOS+YpdhZeioIfrAGr+b/AXl0A536Fs0wDGI9xMgn5uvX9+lAbeuLPvRgj6aCj2yXsVPAEHvh2vDp/wSIbON+uDfCELSN1SGWArTlPTPrMb83D8AprBx+Ks7CykxNhvXCSLkXpwyAmpeFDVJWFWu2NIBo1m6ENAUjDMAH1csQf/WN+A3eL65Orr+lTyCzeIt/pWGvd2BmxowUTcXtzzCXo2dWtATORqFbedG8DiuR+OZgK6vBlyc/EyTe9zOXs=)
2026-03-17 00:24:44.763589 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLUs9MHHH8eZ7tfjr9jhFkQdlTRFtYcSmUD7cNSJ12w29nDxP6aYBeKdjyAYFrLWPEXCTrS3oOWwV2GrqO7JaZ0=)
2026-03-17 00:24:44.763601 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINau0eo932OpVVBIVQVQhod8xgc9HHJRI6cc7Y/CAd/M)
2026-03-17 00:24:44.763613 | orchestrator |
2026-03-17 00:24:44.763626 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-17 00:24:44.763638 | orchestrator | Tuesday 17 March 2026 00:24:41 +0000 (0:00:01.038) 0:00:09.370 *********
2026-03-17 00:24:44.763651 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/D5TjooFlHXZXYmYUfpq79LGgxJhh360pq4q5LSQOUqHOExfAtdMPUZVdMsNd4n9YWgJZvZdpohO9Wzbq+QvlIaDnqOV3RSYt70sokJvdhYofY2Ag3iWLwOStRP1OGI7mmN3MA4u9SK15lS86yTo3Et/2Dcq2UGmqhYZwtCz/LGyiqbk4fufAKZwnTxPrNu80rM6+/v8Fi0zwqf0jD0npLGMyX0NMiod32vEFlWHqEEcRqTUvdUBk7NS1CORIIWEzZEx3XQDp0TObjI1tqoISughoex/hcsMQIHqEhCRFdz6Z7TqRABJXXGyZKnkU7nqUZZjsytSxLuPqIORBOKMuEuQ6Cb4n88zZfqVz5cuSYDpazTNykH4DsRWTR4YKtJ2meiGjwZvVoRTUvAJrnWf4ieYNsdomMthdaqgUpk7EK8lZJ6ltpz7kT9SJkZ8LH4ZSxovwpY0c9vHjN2e5TfsNxgKofw0Pj8psDv4NghdzAmTQ/l/YWsy2hOHpczgAg+s=) 2026-03-17 00:24:44.763663 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHVxG+yLDy6+1VzkP2BXBmyBbqLOX0Ou73Fi+oEvcvPI0YiAbBFDSVK/q7ge4z4LyD5iJEMxQu2Q7t6e9rQNYRI=) 2026-03-17 00:24:44.763682 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAuJeqs9Wuy+vSsvJRCcmMnNZdZwcCftdQ9/qPb4xQed) 2026-03-17 00:24:44.763694 | orchestrator | 2026-03-17 00:24:44.763707 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:24:44.763719 | orchestrator | Tuesday 17 March 2026 00:24:42 +0000 (0:00:01.007) 0:00:10.378 ********* 2026-03-17 00:24:44.763822 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLQlJltgf5mXGv6JimZ/z3i+MpDeoFDQubfbz33Yg46LH4nuQBUDpBzFwEyN+0B2GdGzwQaZyveyF749m6QNSCM=) 2026-03-17 00:24:44.763845 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCyXOdKpLXlbwsioM+rkEEhK5dar6t/VW2kcXjPBGMXzHqI/vHsyTA/zqOj2fGgiyXHP9LdXOOELUrsjgLziLchIA9Ee4sRJV17UZ4IchCyDJClmuHeKUpZ4Bx0toVGjay3YPfeSihJ9Kd6ovfyg/5lnt67kAZaCQ6gh04GCmFPZxvXJtNeR9d8dbgN6kaa1cIC7usQEfh/OOUj4ZDIJoSZXgGlL/aMsZHnu4kcsiZlBge/JHQ0tbHVciGuiJ2uxBcklH64INx4mX+hWHWvafzorynEoh2n1HRvm1hSBpwM/jyFXQ6ap5o44kMMn+M9GUY6gXURQgMRPljsrjIVZYKE3AGHcKFF8oT024d5Bkmcx1PYdg8fxE9cAVFgxtb8HctBUVoSE5qgBKzURtH/8Yoqqvyb3x6YV0K/64adshKDytsv1y3zygqDV8jHqpXpocNbMSozWClOwLmUieuAgyiYv0jn0hUtZBZsB0UlpeX72g9dpIZgGgJtbWOUQpn48YE=) 2026-03-17 00:24:44.763863 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEcZwbAamS2mGu+2ru+1Ds9ol6UbXwYRT6Qzc//vqtt+) 2026-03-17 00:24:44.763874 | orchestrator | 2026-03-17 00:24:44.763885 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:24:44.763895 | orchestrator | Tuesday 17 March 2026 00:24:43 +0000 (0:00:01.028) 0:00:11.406 ********* 2026-03-17 00:24:44.763915 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJSEZmutBACypBkKmot+FNCnfYStnRLg4glKi7DvDGAY) 2026-03-17 00:24:55.271298 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChGq2msa5wj+dVLpi+ZanT7lEJRNUd3gibLWTL0YMfPBPo+Sx5aY02DdOWuWdxwv+FW05xDXCza+d37Qxwbk0xT/Zp2TvaRIzCXwVqoQJE0SPqNIByxarPiEQhqpuQvh5rlPjg14mIC+R24M2SJZMoA4WLqBotChEbSZdsW3yq8XXLB3wnfDtn/uedFN4o6NeGNJwwZKP+tLA25uV7OETpgrUgG0hQXG2hNGGqVTSHmNatMtzdg9HjV00/C/7j82nrdE7BRLN+EQIZNIKi8aDuGFa+Veq89eyM9cGIEkdU9ht8uPeA9XiYfBsdZVSoFi/w53n5uNOGIQzA8G8tFUFi9j4acOAXi49F5bFLns9ZWBX2rOU72OXREJXWUdk0RbtiHKTb3lvdGvIyA9conN1kgpSSyHy/cDNKrGkpRx0ElLM1boXnUyyg1/Aslad8e/s6krd1J/sPEdOyE6OULF4Mlzh4GvxDH5Oglkt/AJ23MMZt/s2fJ7guv7F+8ogJo+U=) 2026-03-17 00:24:55.271421 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJdcLd72Yc8mgDxKqO5K8w/sOiTLdn/FrvWX9cQZJce+PhMHfgR2/2m7iY0JM+Mmlv1Lb/eYQptABVJ6LjSoK9c=) 2026-03-17 00:24:55.271441 | orchestrator | 2026-03-17 00:24:55.271455 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:24:55.271467 | orchestrator | Tuesday 17 March 2026 00:24:44 +0000 (0:00:01.015) 0:00:12.421 ********* 2026-03-17 00:24:55.271478 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGfxbwoCKgKsY3I4C2hhbHSyfIs1VX+BUkXgNpFIJ6CfbhPWEMRo7QFf6Zgk/d61F8SEm8WNakrZESDUkvZIoP4=) 2026-03-17 00:24:55.271490 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyDwIb42fGeRDDCQJTPXc4mW4wfJYwvFgsYOOQA+SAAWYzNnx02EhojmXdezvgdqYxHZUTMxZxxPYzYhuwf9hL404NyQxorKyYE1coQxXajyhl4mUtZNrgTHSBXVCJfKcXZ5hOgZWy6cjQ2WiPWPgWExj1trQ/GpkHnzj1QgroK8CFRO+agSncKV1ZZNhQRX/mnLEy2BN4a5y5qiiZOtmIHLth0LHCfbcNlKlukz6XufBNOzm+3ksg4MiwY6DBlpVz90PoT9z0SYcnruFMHEFt5iy7YFGOklWFf5+V5TkmcZeaf9aeY1QNc5gEIqZQOBqezl/Ei1tp6d07ejWLhkzBPRH4R+u1STi2HWh40Bgb5yhSnIST2BWaqBSfdBQhVDO+8YVEZckClSstrNHHD2cFLcL8iy3gRDqKPlx37W1Yx6uWkUL4386ECCOqxyaefiqQCSexB2p8mXE0FAYk7ds1IU32LVSbwoOf0089VMd/ufnGl7CiU1H+hZjXDZ0DjVk=) 2026-03-17 00:24:55.271523 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINpANNtYHO1TnRhTU8iUEFOG0xL53dmGysiqR1VPT20s) 2026-03-17 00:24:55.271535 | orchestrator | 2026-03-17 00:24:55.271546 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-17 00:24:55.271558 | orchestrator | Tuesday 17 March 2026 00:24:45 +0000 (0:00:01.016) 0:00:13.438 ********* 2026-03-17 00:24:55.271569 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-17 00:24:55.271580 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-17 
00:24:55.271591 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-17 00:24:55.271602 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-17 00:24:55.271612 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-17 00:24:55.271623 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-17 00:24:55.271633 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-17 00:24:55.271644 | orchestrator | 2026-03-17 00:24:55.271654 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-17 00:24:55.271666 | orchestrator | Tuesday 17 March 2026 00:24:50 +0000 (0:00:05.156) 0:00:18.594 ********* 2026-03-17 00:24:55.271678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-17 00:24:55.271691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-17 00:24:55.271702 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-17 00:24:55.271712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-17 00:24:55.271723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-17 00:24:55.271734 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-17 00:24:55.271745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-17 00:24:55.271755 | orchestrator | 2026-03-17 00:24:55.271779 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:24:55.271791 | orchestrator | Tuesday 17 March 2026 00:24:51 +0000 (0:00:00.168) 0:00:18.763 ********* 2026-03-17 00:24:55.271802 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBadGpmXMP1x2Ir8hO76YuKMbfLUP1pmbK7x78gkIP3g) 2026-03-17 00:24:55.271828 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDG8K2LiQZpLf3UKfNfK5WWJUkSIXdy8Zed3Q7qI7QleAXlYDFCJioilhoNpiWl7Q+MeXJ7+QAVSC2wNShvhaLeXIpjtbJE6306jhcysFAexRV3QdDeIY7K7M8E9/Wer/NBG3ke7JnYZJ2Lbk2xS75anSBVu/hN0P3jae+FyL9IomVdjp6y+n8mgyyzEsA/B3oorEt0UGvila3CVwvIhs33XVHrswUzdHSBojNeD5SfesqmMZ1X+Xb2QGbG/70DdO3sB20vXPGjka2RgyUDVbw9aiYQAFhIsJotmMRfEFZOfxjejXKuJ9K4qHRRT7mm5MEVSAtP6uqudJJbuEGwPStH5r2p7UKUrBnZCd3AYPmK05RdbSQdNKpXgnzaSLd8FI00KApf+K96crYhfC7yVEVLFe8pdW1whfYunFGQUMMDUAIGLUl6iaublfoKU4tPgKHRnKcC6exqtm4KSU3v62J+0eUm+YAr/cqZkNuuJX9juP0hRnYvih1IDB+yq9X5yO0=) 2026-03-17 00:24:55.271840 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB6r9jLONQV3MhcbiuW+ZXmp2PnFW8Mq//spJMll5th9a9d3RGf3WOgZWhsEZ1mpiCtGoucAxA/41WBQQGkWBVI=) 2026-03-17 00:24:55.271859 | orchestrator | 2026-03-17 00:24:55.271870 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:24:55.271881 | orchestrator | Tuesday 17 March 2026 
00:24:52 +0000 (0:00:01.050) 0:00:19.813 ********* 2026-03-17 00:24:55.271893 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC87P7hfleBb6eBhWyL3W5eIGgLxjBGhQ30z7yhO1T2L4fRXJEO7nscY/2zziP6EuKaynsZt7jSmsA1/MIGRfxQWqZj3mH8JR0dEP+w1+lNjMF6pLt8byj9l5H3B0U6Sih3G15Tq/oyw4vf2nMVtPMnw0v7Qtbz+qE36Wdh1cVgVNKbkQqN2wyEg2Guuvcj+/1wgMjn6lSiSVDJIxW41WqJ8fer9IBDz2fY7cRGCvF+67VM0poK2mmqPlFyGcFuVIZThKAtIMa6y6cIcPAT3dqmI7xxg0ORnvVtlMSmLrgdVcXPQ46Vawn74zMrA3kI5H/pt3VA+oVUnr/9xgPrdVwtLwFPnq6mfPs/tiuv7/VqekxWTH8uwL8xnrBvkQWxXvBuJKcSxVZWsHCGNWxqxMIZ78nOqwYQI6U8Bafe859+kfoSyvspmhvjiNP7tgcCzMptAGmtMi6myVsCqA7/y+bYyM7IWvnz0mvZchrgjRGxBNSP2JQRbABsuk+FDRxOvP0=) 2026-03-17 00:24:55.271905 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI4eKej0r2Cr1VUCZwE8hI4mP7k39YDzXICW3TK2Bcp5Mu1cD18FUVtKslRA1BqbRmEyBon0YJ/jM8RRlYNdN2M=) 2026-03-17 00:24:55.271916 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEtAI4Bj53HOjHECbpjU3GGHlYR6/CZ7Ch6S4ukn/tgC) 2026-03-17 00:24:55.271926 | orchestrator | 2026-03-17 00:24:55.271937 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:24:55.271948 | orchestrator | Tuesday 17 March 2026 00:24:53 +0000 (0:00:01.044) 0:00:20.858 ********* 2026-03-17 00:24:55.271959 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCpZEZAmcV0O0mYJFNXgnySksiqDWrmNd5C/PdLVVFJ93h4d6BlEQuKZ2IuRnuZwvv9bPL4LtK46bctmc+qf4+WuWLVBBnl33D/0ut3yxgQwGzx9rAfMTyM3qQdxz/fjn547JK+Ubk6oeKWundNyoDV+3LIYtKATSjDA0p61zupJ7P2nnBfxggGY/wb/grjqhpQVtnjzolHqhW1B+XTVerANVmVtqj8I99mYnajNHPPWl7ACELyFZCLULRg9Uj+FoHm/mOS+YpdhZeioIfrAGr+b/AXl0A536Fs0wDGI9xMgn5uvX9+lAbeuLPvRgj6aCj2yXsVPAEHvh2vDp/wSIbON+uDfCELSN1SGWArTlPTPrMb83D8AprBx+Ks7CykxNhvXCSLkXpwyAmpeFDVJWFWu2NIBo1m6ENAUjDMAH1csQf/WN+A3eL65Orr+lTyCzeIt/pWGvd2BmxowUTcXtzzCXo2dWtATORqFbedG8DiuR+OZgK6vBlyc/EyTe9zOXs=) 2026-03-17 00:24:55.271970 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLUs9MHHH8eZ7tfjr9jhFkQdlTRFtYcSmUD7cNSJ12w29nDxP6aYBeKdjyAYFrLWPEXCTrS3oOWwV2GrqO7JaZ0=) 2026-03-17 00:24:55.271981 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINau0eo932OpVVBIVQVQhod8xgc9HHJRI6cc7Y/CAd/M) 2026-03-17 00:24:55.271992 | orchestrator | 2026-03-17 00:24:55.272002 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:24:55.272013 | orchestrator | Tuesday 17 March 2026 00:24:54 +0000 (0:00:01.021) 0:00:21.879 ********* 2026-03-17 00:24:55.272024 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHVxG+yLDy6+1VzkP2BXBmyBbqLOX0Ou73Fi+oEvcvPI0YiAbBFDSVK/q7ge4z4LyD5iJEMxQu2Q7t6e9rQNYRI=) 2026-03-17 00:24:55.272054 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC/D5TjooFlHXZXYmYUfpq79LGgxJhh360pq4q5LSQOUqHOExfAtdMPUZVdMsNd4n9YWgJZvZdpohO9Wzbq+QvlIaDnqOV3RSYt70sokJvdhYofY2Ag3iWLwOStRP1OGI7mmN3MA4u9SK15lS86yTo3Et/2Dcq2UGmqhYZwtCz/LGyiqbk4fufAKZwnTxPrNu80rM6+/v8Fi0zwqf0jD0npLGMyX0NMiod32vEFlWHqEEcRqTUvdUBk7NS1CORIIWEzZEx3XQDp0TObjI1tqoISughoex/hcsMQIHqEhCRFdz6Z7TqRABJXXGyZKnkU7nqUZZjsytSxLuPqIORBOKMuEuQ6Cb4n88zZfqVz5cuSYDpazTNykH4DsRWTR4YKtJ2meiGjwZvVoRTUvAJrnWf4ieYNsdomMthdaqgUpk7EK8lZJ6ltpz7kT9SJkZ8LH4ZSxovwpY0c9vHjN2e5TfsNxgKofw0Pj8psDv4NghdzAmTQ/l/YWsy2hOHpczgAg+s=) 2026-03-17 00:24:59.525146 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAuJeqs9Wuy+vSsvJRCcmMnNZdZwcCftdQ9/qPb4xQed) 2026-03-17 00:24:59.525257 | orchestrator | 2026-03-17 00:24:59.525268 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:24:59.525286 | orchestrator | Tuesday 17 March 2026 00:24:55 +0000 (0:00:01.052) 0:00:22.932 ********* 2026-03-17 00:24:59.525992 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLQlJltgf5mXGv6JimZ/z3i+MpDeoFDQubfbz33Yg46LH4nuQBUDpBzFwEyN+0B2GdGzwQaZyveyF749m6QNSCM=) 2026-03-17 00:24:59.526187 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyXOdKpLXlbwsioM+rkEEhK5dar6t/VW2kcXjPBGMXzHqI/vHsyTA/zqOj2fGgiyXHP9LdXOOELUrsjgLziLchIA9Ee4sRJV17UZ4IchCyDJClmuHeKUpZ4Bx0toVGjay3YPfeSihJ9Kd6ovfyg/5lnt67kAZaCQ6gh04GCmFPZxvXJtNeR9d8dbgN6kaa1cIC7usQEfh/OOUj4ZDIJoSZXgGlL/aMsZHnu4kcsiZlBge/JHQ0tbHVciGuiJ2uxBcklH64INx4mX+hWHWvafzorynEoh2n1HRvm1hSBpwM/jyFXQ6ap5o44kMMn+M9GUY6gXURQgMRPljsrjIVZYKE3AGHcKFF8oT024d5Bkmcx1PYdg8fxE9cAVFgxtb8HctBUVoSE5qgBKzURtH/8Yoqqvyb3x6YV0K/64adshKDytsv1y3zygqDV8jHqpXpocNbMSozWClOwLmUieuAgyiYv0jn0hUtZBZsB0UlpeX72g9dpIZgGgJtbWOUQpn48YE=) 2026-03-17 00:24:59.526210 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEcZwbAamS2mGu+2ru+1Ds9ol6UbXwYRT6Qzc//vqtt+) 2026-03-17 00:24:59.526223 | orchestrator | 2026-03-17 00:24:59.526235 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:24:59.526247 | orchestrator | Tuesday 17 March 2026 00:24:56 +0000 (0:00:01.030) 0:00:23.963 ********* 2026-03-17 00:24:59.526258 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChGq2msa5wj+dVLpi+ZanT7lEJRNUd3gibLWTL0YMfPBPo+Sx5aY02DdOWuWdxwv+FW05xDXCza+d37Qxwbk0xT/Zp2TvaRIzCXwVqoQJE0SPqNIByxarPiEQhqpuQvh5rlPjg14mIC+R24M2SJZMoA4WLqBotChEbSZdsW3yq8XXLB3wnfDtn/uedFN4o6NeGNJwwZKP+tLA25uV7OETpgrUgG0hQXG2hNGGqVTSHmNatMtzdg9HjV00/C/7j82nrdE7BRLN+EQIZNIKi8aDuGFa+Veq89eyM9cGIEkdU9ht8uPeA9XiYfBsdZVSoFi/w53n5uNOGIQzA8G8tFUFi9j4acOAXi49F5bFLns9ZWBX2rOU72OXREJXWUdk0RbtiHKTb3lvdGvIyA9conN1kgpSSyHy/cDNKrGkpRx0ElLM1boXnUyyg1/Aslad8e/s6krd1J/sPEdOyE6OULF4Mlzh4GvxDH5Oglkt/AJ23MMZt/s2fJ7guv7F+8ogJo+U=) 2026-03-17 00:24:59.526270 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJdcLd72Yc8mgDxKqO5K8w/sOiTLdn/FrvWX9cQZJce+PhMHfgR2/2m7iY0JM+Mmlv1Lb/eYQptABVJ6LjSoK9c=) 2026-03-17 00:24:59.526281 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJSEZmutBACypBkKmot+FNCnfYStnRLg4glKi7DvDGAY) 2026-03-17 00:24:59.526292 | orchestrator | 2026-03-17 00:24:59.526303 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:24:59.526314 | orchestrator | Tuesday 17 March 2026 00:24:57 +0000 (0:00:01.039) 0:00:25.002 ********* 2026-03-17 00:24:59.526345 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCyDwIb42fGeRDDCQJTPXc4mW4wfJYwvFgsYOOQA+SAAWYzNnx02EhojmXdezvgdqYxHZUTMxZxxPYzYhuwf9hL404NyQxorKyYE1coQxXajyhl4mUtZNrgTHSBXVCJfKcXZ5hOgZWy6cjQ2WiPWPgWExj1trQ/GpkHnzj1QgroK8CFRO+agSncKV1ZZNhQRX/mnLEy2BN4a5y5qiiZOtmIHLth0LHCfbcNlKlukz6XufBNOzm+3ksg4MiwY6DBlpVz90PoT9z0SYcnruFMHEFt5iy7YFGOklWFf5+V5TkmcZeaf9aeY1QNc5gEIqZQOBqezl/Ei1tp6d07ejWLhkzBPRH4R+u1STi2HWh40Bgb5yhSnIST2BWaqBSfdBQhVDO+8YVEZckClSstrNHHD2cFLcL8iy3gRDqKPlx37W1Yx6uWkUL4386ECCOqxyaefiqQCSexB2p8mXE0FAYk7ds1IU32LVSbwoOf0089VMd/ufnGl7CiU1H+hZjXDZ0DjVk=) 2026-03-17 00:24:59.526358 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGfxbwoCKgKsY3I4C2hhbHSyfIs1VX+BUkXgNpFIJ6CfbhPWEMRo7QFf6Zgk/d61F8SEm8WNakrZESDUkvZIoP4=) 2026-03-17 00:24:59.526369 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINpANNtYHO1TnRhTU8iUEFOG0xL53dmGysiqR1VPT20s) 2026-03-17 00:24:59.526380 | orchestrator | 2026-03-17 00:24:59.526391 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-17 00:24:59.526420 | orchestrator | Tuesday 17 March 2026 00:24:58 +0000 (0:00:01.024) 0:00:26.027 ********* 2026-03-17 00:24:59.526433 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-17 00:24:59.526444 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-17 00:24:59.526455 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-17 00:24:59.526465 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-17 00:24:59.526476 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-17 00:24:59.526512 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-17 00:24:59.526524 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-17 00:24:59.526535 | orchestrator | 
skipping: [testbed-manager] 2026-03-17 00:24:59.526546 | orchestrator | 2026-03-17 00:24:59.526557 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-17 00:24:59.526568 | orchestrator | Tuesday 17 March 2026 00:24:58 +0000 (0:00:00.160) 0:00:26.188 ********* 2026-03-17 00:24:59.526579 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:24:59.526590 | orchestrator | 2026-03-17 00:24:59.526601 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-17 00:24:59.526611 | orchestrator | Tuesday 17 March 2026 00:24:58 +0000 (0:00:00.042) 0:00:26.230 ********* 2026-03-17 00:24:59.526627 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:24:59.526639 | orchestrator | 2026-03-17 00:24:59.526649 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-17 00:24:59.526660 | orchestrator | Tuesday 17 March 2026 00:24:58 +0000 (0:00:00.056) 0:00:26.286 ********* 2026-03-17 00:24:59.526671 | orchestrator | changed: [testbed-manager] 2026-03-17 00:24:59.526681 | orchestrator | 2026-03-17 00:24:59.526692 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:24:59.526703 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-17 00:24:59.526715 | orchestrator | 2026-03-17 00:24:59.526725 | orchestrator | 2026-03-17 00:24:59.526736 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:24:59.526746 | orchestrator | Tuesday 17 March 2026 00:24:59 +0000 (0:00:00.728) 0:00:27.014 ********* 2026-03-17 00:24:59.526757 | orchestrator | =============================================================================== 2026-03-17 00:24:59.526767 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.85s 2026-03-17 
00:24:59.526778 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.16s 2026-03-17 00:24:59.526790 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-17 00:24:59.526801 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-17 00:24:59.526811 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-17 00:24:59.526822 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-17 00:24:59.526832 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-17 00:24:59.526843 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-17 00:24:59.526853 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-17 00:24:59.526864 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-17 00:24:59.526875 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-17 00:24:59.526885 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-17 00:24:59.526896 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-17 00:24:59.526906 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-17 00:24:59.526917 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-17 00:24:59.526935 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-17 00:24:59.526946 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.73s 2026-03-17 
00:24:59.526957 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-17 00:24:59.526967 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-03-17 00:24:59.526979 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2026-03-17 00:24:59.793356 | orchestrator | + osism apply squid 2026-03-17 00:25:11.909297 | orchestrator | 2026-03-17 00:25:11 | INFO  | Task c2e99554-f097-4ddc-9cfa-5e3aaa0167dc (squid) was prepared for execution. 2026-03-17 00:25:11.909428 | orchestrator | 2026-03-17 00:25:11 | INFO  | It takes a moment until task c2e99554-f097-4ddc-9cfa-5e3aaa0167dc (squid) has been started and output is visible here. 2026-03-17 00:27:06.287308 | orchestrator | 2026-03-17 00:27:06.287469 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-17 00:27:06.287491 | orchestrator | 2026-03-17 00:27:06.287503 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-17 00:27:06.287515 | orchestrator | Tuesday 17 March 2026 00:25:15 +0000 (0:00:00.116) 0:00:00.116 ********* 2026-03-17 00:27:06.287527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-17 00:27:06.287540 | orchestrator | 2026-03-17 00:27:06.287551 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-17 00:27:06.287562 | orchestrator | Tuesday 17 March 2026 00:25:15 +0000 (0:00:00.068) 0:00:00.185 ********* 2026-03-17 00:27:06.287573 | orchestrator | ok: [testbed-manager] 2026-03-17 00:27:06.287585 | orchestrator | 2026-03-17 00:27:06.287596 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-17 
00:27:06.287607 | orchestrator | Tuesday 17 March 2026 00:25:16 +0000 (0:00:01.031) 0:00:01.216 ********* 2026-03-17 00:27:06.287635 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-17 00:27:06.287647 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-17 00:27:06.287658 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-17 00:27:06.287669 | orchestrator | 2026-03-17 00:27:06.287680 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-17 00:27:06.287691 | orchestrator | Tuesday 17 March 2026 00:25:17 +0000 (0:00:00.963) 0:00:02.180 ********* 2026-03-17 00:27:06.287702 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-17 00:27:06.287713 | orchestrator | 2026-03-17 00:27:06.287725 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-17 00:27:06.287735 | orchestrator | Tuesday 17 March 2026 00:25:18 +0000 (0:00:00.915) 0:00:03.096 ********* 2026-03-17 00:27:06.287746 | orchestrator | ok: [testbed-manager] 2026-03-17 00:27:06.287757 | orchestrator | 2026-03-17 00:27:06.287768 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-17 00:27:06.287779 | orchestrator | Tuesday 17 March 2026 00:25:19 +0000 (0:00:00.288) 0:00:03.384 ********* 2026-03-17 00:27:06.287790 | orchestrator | changed: [testbed-manager] 2026-03-17 00:27:06.287803 | orchestrator | 2026-03-17 00:27:06.287814 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-17 00:27:06.287825 | orchestrator | Tuesday 17 March 2026 00:25:19 +0000 (0:00:00.803) 0:00:04.188 ********* 2026-03-17 00:27:06.287837 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-17 00:27:06.287848 | orchestrator | ok: [testbed-manager] 2026-03-17 00:27:06.287864 | orchestrator | 2026-03-17 00:27:06.287875 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-17 00:27:06.287886 | orchestrator | Tuesday 17 March 2026 00:25:53 +0000 (0:00:33.484) 0:00:37.673 ********* 2026-03-17 00:27:06.287927 | orchestrator | changed: [testbed-manager] 2026-03-17 00:27:06.287948 | orchestrator | 2026-03-17 00:27:06.287967 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-17 00:27:06.287986 | orchestrator | Tuesday 17 March 2026 00:26:05 +0000 (0:00:11.964) 0:00:49.637 ********* 2026-03-17 00:27:06.288005 | orchestrator | Pausing for 60 seconds 2026-03-17 00:27:06.288022 | orchestrator | changed: [testbed-manager] 2026-03-17 00:27:06.288042 | orchestrator | 2026-03-17 00:27:06.288089 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-17 00:27:06.288109 | orchestrator | Tuesday 17 March 2026 00:27:05 +0000 (0:01:00.089) 0:01:49.727 ********* 2026-03-17 00:27:06.288128 | orchestrator | ok: [testbed-manager] 2026-03-17 00:27:06.288147 | orchestrator | 2026-03-17 00:27:06.288165 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-17 00:27:06.288184 | orchestrator | Tuesday 17 March 2026 00:27:05 +0000 (0:00:00.071) 0:01:49.798 ********* 2026-03-17 00:27:06.288203 | orchestrator | changed: [testbed-manager] 2026-03-17 00:27:06.288222 | orchestrator | 2026-03-17 00:27:06.288242 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:27:06.288260 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:27:06.288280 | orchestrator | 2026-03-17 00:27:06.288291 | orchestrator | 2026-03-17 00:27:06.288302 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-17 00:27:06.288312 | orchestrator | Tuesday 17 March 2026 00:27:06 +0000 (0:00:00.594) 0:01:50.393 ********* 2026-03-17 00:27:06.288323 | orchestrator | =============================================================================== 2026-03-17 00:27:06.288353 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-03-17 00:27:06.288364 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 33.49s 2026-03-17 00:27:06.288375 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.96s 2026-03-17 00:27:06.288386 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.03s 2026-03-17 00:27:06.288396 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.96s 2026-03-17 00:27:06.288407 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.92s 2026-03-17 00:27:06.288418 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.80s 2026-03-17 00:27:06.288428 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s 2026-03-17 00:27:06.288439 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.29s 2026-03-17 00:27:06.288450 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-17 00:27:06.288460 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-03-17 00:27:06.456857 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-17 00:27:06.456948 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-17 00:27:06.502542 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-17 00:27:06.502647 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-03-17 00:27:06.509461 | orchestrator | + set -e 2026-03-17 00:27:06.509994 | orchestrator | + NAMESPACE=kolla/release 2026-03-17 00:27:06.510158 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-17 00:27:06.515949 | orchestrator | ++ semver 9.5.0 9.0.0 2026-03-17 00:27:06.581985 | orchestrator | + [[ 1 -lt 0 ]] 2026-03-17 00:27:06.582709 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-17 00:27:18.558693 | orchestrator | 2026-03-17 00:27:18 | INFO  | Task a9304e82-d74a-4cdf-acc7-131140f7fdd5 (operator) was prepared for execution. 2026-03-17 00:27:18.558812 | orchestrator | 2026-03-17 00:27:18 | INFO  | It takes a moment until task a9304e82-d74a-4cdf-acc7-131140f7fdd5 (operator) has been started and output is visible here. 2026-03-17 00:27:34.995675 | orchestrator | 2026-03-17 00:27:34.995802 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-17 00:27:34.995821 | orchestrator | 2026-03-17 00:27:34.995839 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:27:34.995858 | orchestrator | Tuesday 17 March 2026 00:27:22 +0000 (0:00:00.101) 0:00:00.101 ********* 2026-03-17 00:27:34.995876 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:27:34.995896 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:27:34.995916 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:27:34.995934 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:27:34.995953 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:27:34.995972 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:27:34.995992 | orchestrator | 2026-03-17 00:27:34.996012 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-17 00:27:34.996026 | orchestrator | Tuesday 17 March 2026 00:27:26 +0000 (0:00:04.275) 0:00:04.376 
********* 2026-03-17 00:27:34.996037 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:27:34.996082 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:27:34.996094 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:27:34.996105 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:27:34.996134 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:27:34.996145 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:27:34.996156 | orchestrator | 2026-03-17 00:27:34.996167 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-17 00:27:34.996179 | orchestrator | 2026-03-17 00:27:34.996198 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-17 00:27:34.996217 | orchestrator | Tuesday 17 March 2026 00:27:27 +0000 (0:00:00.755) 0:00:05.132 ********* 2026-03-17 00:27:34.996237 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:27:34.996257 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:27:34.996276 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:27:34.996295 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:27:34.996313 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:27:34.996332 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:27:34.996351 | orchestrator | 2026-03-17 00:27:34.996371 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-17 00:27:34.996390 | orchestrator | Tuesday 17 March 2026 00:27:27 +0000 (0:00:00.171) 0:00:05.305 ********* 2026-03-17 00:27:34.996409 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:27:34.996427 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:27:34.996446 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:27:34.996464 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:27:34.996482 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:27:34.996501 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:27:34.996513 | orchestrator | 2026-03-17 00:27:34.996524 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-17 00:27:34.996535 | orchestrator | Tuesday 17 March 2026 00:27:27 +0000 (0:00:00.142) 0:00:05.448 ********* 2026-03-17 00:27:34.996546 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:27:34.996559 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:27:34.996570 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:27:34.996581 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:27:34.996591 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:27:34.996602 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:27:34.996613 | orchestrator | 2026-03-17 00:27:34.996624 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-17 00:27:34.996635 | orchestrator | Tuesday 17 March 2026 00:27:28 +0000 (0:00:00.751) 0:00:06.199 ********* 2026-03-17 00:27:34.996646 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:27:34.996657 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:27:34.996667 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:27:34.996678 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:27:34.996689 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:27:34.996699 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:27:34.996710 | orchestrator | 2026-03-17 00:27:34.996721 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-17 00:27:34.996756 | orchestrator | Tuesday 17 March 2026 00:27:29 +0000 (0:00:00.820) 0:00:07.020 ********* 2026-03-17 00:27:34.996768 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-17 00:27:34.996779 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-17 00:27:34.996790 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-17 00:27:34.996800 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-17 00:27:34.996811 | 
orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-17 00:27:34.996822 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-17 00:27:34.996841 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-17 00:27:34.996861 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-17 00:27:34.996878 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-17 00:27:34.996896 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-17 00:27:34.996913 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-17 00:27:34.996932 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-17 00:27:34.996951 | orchestrator | 2026-03-17 00:27:34.996968 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-17 00:27:34.996987 | orchestrator | Tuesday 17 March 2026 00:27:30 +0000 (0:00:01.241) 0:00:08.261 ********* 2026-03-17 00:27:34.997006 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:27:34.997021 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:27:34.997032 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:27:34.997042 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:27:34.997085 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:27:34.997096 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:27:34.997107 | orchestrator | 2026-03-17 00:27:34.997118 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-17 00:27:34.997130 | orchestrator | Tuesday 17 March 2026 00:27:31 +0000 (0:00:01.242) 0:00:09.504 ********* 2026-03-17 00:27:34.997141 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-17 00:27:34.997152 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-17 00:27:34.997162 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-17 00:27:34.997173 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:27:34.997205 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:27:34.997217 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:27:34.997227 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:27:34.997238 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:27:34.997249 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:27:34.997260 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-17 00:27:34.997270 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-17 00:27:34.997281 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-17 00:27:34.997292 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-17 00:27:34.997302 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-17 00:27:34.997313 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-17 00:27:34.997324 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:27:34.997335 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:27:34.997346 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:27:34.997356 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:27:34.997367 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:27:34.997389 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:27:34.997400 | 
orchestrator | 2026-03-17 00:27:34.997411 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-17 00:27:34.997422 | orchestrator | Tuesday 17 March 2026 00:27:32 +0000 (0:00:01.270) 0:00:10.774 ********* 2026-03-17 00:27:34.997433 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:27:34.997450 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:27:34.997469 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:27:34.997487 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:27:34.997505 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:27:34.997523 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:27:34.997543 | orchestrator | 2026-03-17 00:27:34.997562 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-17 00:27:34.997581 | orchestrator | Tuesday 17 March 2026 00:27:33 +0000 (0:00:00.154) 0:00:10.929 ********* 2026-03-17 00:27:34.997593 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:27:34.997603 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:27:34.997614 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:27:34.997624 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:27:34.997635 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:27:34.997645 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:27:34.997655 | orchestrator | 2026-03-17 00:27:34.997666 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-17 00:27:34.997677 | orchestrator | Tuesday 17 March 2026 00:27:33 +0000 (0:00:00.187) 0:00:11.117 ********* 2026-03-17 00:27:34.997688 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:27:34.997698 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:27:34.997709 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:27:34.997719 | orchestrator | changed: [testbed-node-2] 2026-03-17 
00:27:34.997729 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:27:34.997745 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:27:34.997764 | orchestrator | 2026-03-17 00:27:34.997779 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-17 00:27:34.997795 | orchestrator | Tuesday 17 March 2026 00:27:33 +0000 (0:00:00.564) 0:00:11.681 ********* 2026-03-17 00:27:34.997806 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:27:34.997817 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:27:34.997827 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:27:34.997838 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:27:34.997861 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:27:34.997872 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:27:34.997883 | orchestrator | 2026-03-17 00:27:34.997894 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-17 00:27:34.997905 | orchestrator | Tuesday 17 March 2026 00:27:33 +0000 (0:00:00.176) 0:00:11.857 ********* 2026-03-17 00:27:34.997916 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-17 00:27:34.997926 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:27:34.997937 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 00:27:34.997948 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:27:34.997958 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-17 00:27:34.997969 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 00:27:34.997980 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:27:34.997990 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:27:34.998001 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-17 00:27:34.998012 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:27:34.998132 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 
00:27:34.998145 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:27:34.998160 | orchestrator | 2026-03-17 00:27:34.998179 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-17 00:27:34.998198 | orchestrator | Tuesday 17 March 2026 00:27:34 +0000 (0:00:00.703) 0:00:12.561 ********* 2026-03-17 00:27:34.998231 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:27:34.998252 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:27:34.998272 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:27:34.998287 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:27:34.998298 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:27:34.998308 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:27:34.998319 | orchestrator | 2026-03-17 00:27:34.998329 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-17 00:27:34.998340 | orchestrator | Tuesday 17 March 2026 00:27:34 +0000 (0:00:00.156) 0:00:12.718 ********* 2026-03-17 00:27:34.998351 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:27:34.998362 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:27:34.998372 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:27:34.998383 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:27:34.998405 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:27:36.264912 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:27:36.265029 | orchestrator | 2026-03-17 00:27:36.265089 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-17 00:27:36.265105 | orchestrator | Tuesday 17 March 2026 00:27:34 +0000 (0:00:00.143) 0:00:12.861 ********* 2026-03-17 00:27:36.265117 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:27:36.265128 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:27:36.265138 | orchestrator | skipping: [testbed-node-2] 2026-03-17 
00:27:36.265149 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:27:36.265160 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:27:36.265171 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:27:36.265181 | orchestrator | 2026-03-17 00:27:36.265192 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-17 00:27:36.265203 | orchestrator | Tuesday 17 March 2026 00:27:35 +0000 (0:00:00.146) 0:00:13.008 ********* 2026-03-17 00:27:36.265214 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:27:36.265225 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:27:36.265235 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:27:36.265268 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:27:36.265279 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:27:36.265290 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:27:36.265301 | orchestrator | 2026-03-17 00:27:36.265311 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-17 00:27:36.265322 | orchestrator | Tuesday 17 March 2026 00:27:35 +0000 (0:00:00.666) 0:00:13.675 ********* 2026-03-17 00:27:36.265333 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:27:36.265344 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:27:36.265354 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:27:36.265365 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:27:36.265377 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:27:36.265396 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:27:36.265414 | orchestrator | 2026-03-17 00:27:36.265432 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:27:36.265453 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 00:27:36.265477 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 00:27:36.265501 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 00:27:36.265522 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 00:27:36.265539 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 00:27:36.265574 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 00:27:36.265588 | orchestrator | 2026-03-17 00:27:36.265600 | orchestrator | 2026-03-17 00:27:36.265613 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:27:36.265626 | orchestrator | Tuesday 17 March 2026 00:27:36 +0000 (0:00:00.242) 0:00:13.917 ********* 2026-03-17 00:27:36.265638 | orchestrator | =============================================================================== 2026-03-17 00:27:36.265650 | orchestrator | Gathering Facts --------------------------------------------------------- 4.28s 2026-03-17 00:27:36.265662 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s 2026-03-17 00:27:36.265676 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.24s 2026-03-17 00:27:36.265688 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.24s 2026-03-17 00:27:36.265700 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.82s 2026-03-17 00:27:36.265712 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s 2026-03-17 00:27:36.265724 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.75s 2026-03-17 00:27:36.265736 | orchestrator | osism.commons.operator : Set ssh 
authorized keys ------------------------ 0.70s 2026-03-17 00:27:36.265748 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s 2026-03-17 00:27:36.265760 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s 2026-03-17 00:27:36.265772 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s 2026-03-17 00:27:36.265784 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s 2026-03-17 00:27:36.265796 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2026-03-17 00:27:36.265809 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2026-03-17 00:27:36.265819 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2026-03-17 00:27:36.265830 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2026-03-17 00:27:36.265840 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2026-03-17 00:27:36.265851 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2026-03-17 00:27:36.265862 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s 2026-03-17 00:27:36.512104 | orchestrator | + osism apply --environment custom facts 2026-03-17 00:27:38.520615 | orchestrator | 2026-03-17 00:27:38 | INFO  | Trying to run play facts in environment custom 2026-03-17 00:27:48.634375 | orchestrator | 2026-03-17 00:27:48 | INFO  | Task a68027f9-ecdc-4919-bc1c-c9c7ed412f4f (facts) was prepared for execution. 2026-03-17 00:27:48.634479 | orchestrator | 2026-03-17 00:27:48 | INFO  | It takes a moment until task a68027f9-ecdc-4919-bc1c-c9c7ed412f4f (facts) has been started and output is visible here. 
2026-03-17 00:28:33.913985 | orchestrator | 2026-03-17 00:28:33.914230 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-17 00:28:33.914250 | orchestrator | 2026-03-17 00:28:33.914261 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-17 00:28:33.914272 | orchestrator | Tuesday 17 March 2026 00:27:52 +0000 (0:00:00.072) 0:00:00.072 ********* 2026-03-17 00:28:33.914282 | orchestrator | ok: [testbed-manager] 2026-03-17 00:28:33.914293 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:28:33.914304 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:28:33.914313 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:28:33.914323 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:28:33.914332 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:28:33.914342 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:28:33.914375 | orchestrator | 2026-03-17 00:28:33.914386 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-17 00:28:33.914396 | orchestrator | Tuesday 17 March 2026 00:27:53 +0000 (0:00:01.337) 0:00:01.410 ********* 2026-03-17 00:28:33.914405 | orchestrator | ok: [testbed-manager] 2026-03-17 00:28:33.914415 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:28:33.914424 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:28:33.914434 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:28:33.914443 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:28:33.914452 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:28:33.914462 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:28:33.914471 | orchestrator | 2026-03-17 00:28:33.914481 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-17 00:28:33.914492 | orchestrator | 2026-03-17 00:28:33.914503 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-03-17 00:28:33.914517 | orchestrator | Tuesday 17 March 2026 00:27:54 +0000 (0:00:01.220) 0:00:02.630 ********* 2026-03-17 00:28:33.914535 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:28:33.914547 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:28:33.914558 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:28:33.914569 | orchestrator | 2026-03-17 00:28:33.914579 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-17 00:28:33.914591 | orchestrator | Tuesday 17 March 2026 00:27:54 +0000 (0:00:00.069) 0:00:02.700 ********* 2026-03-17 00:28:33.914602 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:28:33.914613 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:28:33.914624 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:28:33.914635 | orchestrator | 2026-03-17 00:28:33.914646 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-17 00:28:33.914657 | orchestrator | Tuesday 17 March 2026 00:27:55 +0000 (0:00:00.170) 0:00:02.871 ********* 2026-03-17 00:28:33.914668 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:28:33.914678 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:28:33.914689 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:28:33.914700 | orchestrator | 2026-03-17 00:28:33.914711 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-17 00:28:33.914722 | orchestrator | Tuesday 17 March 2026 00:27:55 +0000 (0:00:00.161) 0:00:03.032 ********* 2026-03-17 00:28:33.914735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:28:33.914748 | orchestrator | 2026-03-17 00:28:33.914759 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-03-17 00:28:33.914770 | orchestrator | Tuesday 17 March 2026 00:27:55 +0000 (0:00:00.104) 0:00:03.137 ********* 2026-03-17 00:28:33.914780 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:28:33.914791 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:28:33.914802 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:28:33.914812 | orchestrator | 2026-03-17 00:28:33.914824 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-17 00:28:33.914834 | orchestrator | Tuesday 17 March 2026 00:27:55 +0000 (0:00:00.449) 0:00:03.587 ********* 2026-03-17 00:28:33.914851 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:28:33.914865 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:28:33.914877 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:28:33.914886 | orchestrator | 2026-03-17 00:28:33.914896 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-17 00:28:33.914906 | orchestrator | Tuesday 17 March 2026 00:27:55 +0000 (0:00:00.091) 0:00:03.678 ********* 2026-03-17 00:28:33.914916 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:28:33.914925 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:28:33.914934 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:28:33.914944 | orchestrator | 2026-03-17 00:28:33.914954 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-17 00:28:33.914971 | orchestrator | Tuesday 17 March 2026 00:27:56 +0000 (0:00:01.039) 0:00:04.718 ********* 2026-03-17 00:28:33.914981 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:28:33.914990 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:28:33.915000 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:28:33.915009 | orchestrator | 2026-03-17 00:28:33.915019 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-17 
00:28:33.915109 | orchestrator | Tuesday 17 March 2026 00:27:57 +0000 (0:00:00.464) 0:00:05.183 ********* 2026-03-17 00:28:33.915122 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:28:33.915132 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:28:33.915141 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:28:33.915164 | orchestrator | 2026-03-17 00:28:33.915184 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-17 00:28:33.915195 | orchestrator | Tuesday 17 March 2026 00:27:58 +0000 (0:00:01.203) 0:00:06.386 ********* 2026-03-17 00:28:33.915205 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:28:33.915214 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:28:33.915224 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:28:33.915233 | orchestrator | 2026-03-17 00:28:33.915243 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-17 00:28:33.915253 | orchestrator | Tuesday 17 March 2026 00:28:15 +0000 (0:00:16.976) 0:00:23.363 ********* 2026-03-17 00:28:33.915262 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:28:33.915271 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:28:33.915281 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:28:33.915290 | orchestrator | 2026-03-17 00:28:33.915300 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-17 00:28:33.915327 | orchestrator | Tuesday 17 March 2026 00:28:15 +0000 (0:00:00.084) 0:00:23.448 ********* 2026-03-17 00:28:33.915338 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:28:33.915348 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:28:33.915357 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:28:33.915366 | orchestrator | 2026-03-17 00:28:33.915376 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-17 
00:28:33.915390 | orchestrator | Tuesday 17 March 2026 00:28:24 +0000 (0:00:08.928) 0:00:32.376 *********
2026-03-17 00:28:33.915400 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:33.915410 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:33.915419 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:33.915429 | orchestrator |
2026-03-17 00:28:33.915438 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-17 00:28:33.915448 | orchestrator | Tuesday 17 March 2026 00:28:25 +0000 (0:00:00.482) 0:00:32.859 *********
2026-03-17 00:28:33.915457 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-17 00:28:33.915467 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-17 00:28:33.915477 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-17 00:28:33.915486 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-17 00:28:33.915495 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-17 00:28:33.915505 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-17 00:28:33.915514 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-17 00:28:33.915523 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-17 00:28:33.915533 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-17 00:28:33.915543 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-17 00:28:33.915552 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-17 00:28:33.915562 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-17 00:28:33.915571 | orchestrator |
2026-03-17 00:28:33.915581 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-17 00:28:33.915598 | orchestrator | Tuesday 17 March 2026 00:28:28 +0000 (0:00:03.589) 0:00:36.448 *********
2026-03-17 00:28:33.915607 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:33.915617 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:33.915626 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:33.915636 | orchestrator |
2026-03-17 00:28:33.915646 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-17 00:28:33.915655 | orchestrator |
2026-03-17 00:28:33.915665 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-17 00:28:33.915674 | orchestrator | Tuesday 17 March 2026 00:28:30 +0000 (0:00:01.504) 0:00:37.952 *********
2026-03-17 00:28:33.915684 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:28:33.915693 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:28:33.915702 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:28:33.915712 | orchestrator | ok: [testbed-manager]
2026-03-17 00:28:33.915721 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:28:33.915731 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:28:33.915740 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:28:33.915750 | orchestrator |
2026-03-17 00:28:33.915760 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:28:33.915770 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:28:33.915781 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:28:33.915792 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:28:33.915801 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:28:33.915811 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:28:33.915821 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:28:33.915830 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:28:33.915840 | orchestrator |
2026-03-17 00:28:33.915849 | orchestrator |
2026-03-17 00:28:33.915859 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:28:33.915869 | orchestrator | Tuesday 17 March 2026 00:28:33 +0000 (0:00:03.766) 0:00:41.719 *********
2026-03-17 00:28:33.915878 | orchestrator | ===============================================================================
2026-03-17 00:28:33.915888 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.98s
2026-03-17 00:28:33.915897 | orchestrator | Install required packages (Debian) -------------------------------------- 8.93s
2026-03-17 00:28:33.915907 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.77s
2026-03-17 00:28:33.915916 | orchestrator | Copy fact files --------------------------------------------------------- 3.59s
2026-03-17 00:28:33.915925 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.50s
2026-03-17 00:28:33.915935 | orchestrator | Create custom facts directory ------------------------------------------- 1.34s
2026-03-17 00:28:33.915950 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2026-03-17 00:28:34.125552 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.20s
2026-03-17 00:28:34.125639 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s
2026-03-17 00:28:34.125668 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2026-03-17 00:28:34.125679 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-03-17 00:28:34.125707 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2026-03-17 00:28:34.125716 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.17s
2026-03-17 00:28:34.125725 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.16s
2026-03-17 00:28:34.125734 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.10s
2026-03-17 00:28:34.125744 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s
2026-03-17 00:28:34.125753 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2026-03-17 00:28:34.125762 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.07s
2026-03-17 00:28:34.379871 | orchestrator | + osism apply bootstrap
2026-03-17 00:28:46.345940 | orchestrator | 2026-03-17 00:28:46 | INFO  | Task d1b7233e-80f4-4a88-a3e8-0232ddb18fcd (bootstrap) was prepared for execution.
2026-03-17 00:28:46.346157 | orchestrator | 2026-03-17 00:28:46 | INFO  | It takes a moment until task d1b7233e-80f4-4a88-a3e8-0232ddb18fcd (bootstrap) has been started and output is visible here.
2026-03-17 00:29:01.798326 | orchestrator |
2026-03-17 00:29:01.798419 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-17 00:29:01.798438 | orchestrator |
2026-03-17 00:29:01.798454 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-17 00:29:01.798469 | orchestrator | Tuesday 17 March 2026 00:28:50 +0000 (0:00:00.114) 0:00:00.114 *********
2026-03-17 00:29:01.798484 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:01.798496 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:01.798504 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:01.798512 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:01.798520 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:01.798527 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:01.798535 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:01.798543 | orchestrator |
2026-03-17 00:29:01.798552 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-17 00:29:01.798560 | orchestrator |
2026-03-17 00:29:01.798568 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-17 00:29:01.798576 | orchestrator | Tuesday 17 March 2026 00:28:50 +0000 (0:00:00.173) 0:00:00.288 *********
2026-03-17 00:29:01.798584 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:01.798592 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:01.798600 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:01.798608 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:01.798616 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:01.798623 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:01.798631 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:01.798639 | orchestrator |
2026-03-17 00:29:01.798647 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-17 00:29:01.798655 | orchestrator |
2026-03-17 00:29:01.798663 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-17 00:29:01.798671 | orchestrator | Tuesday 17 March 2026 00:28:54 +0000 (0:00:03.762) 0:00:04.051 *********
2026-03-17 00:29:01.798680 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-17 00:29:01.798688 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-17 00:29:01.798696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-17 00:29:01.798703 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-17 00:29:01.798711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:29:01.798719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 00:29:01.798727 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-17 00:29:01.798736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 00:29:01.798745 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-17 00:29:01.798775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-17 00:29:01.798784 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-17 00:29:01.798793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-17 00:29:01.798802 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-17 00:29:01.798810 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-17 00:29:01.798819 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-17 00:29:01.798827 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:29:01.798837 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-17 00:29:01.798845 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-17 00:29:01.798854 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-17 00:29:01.798863 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-17 00:29:01.798873 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-17 00:29:01.798884 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-17 00:29:01.798894 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:29:01.798903 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-17 00:29:01.798913 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-17 00:29:01.798923 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-17 00:29:01.798933 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-17 00:29:01.798943 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-17 00:29:01.798953 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-17 00:29:01.798963 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-17 00:29:01.798973 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-17 00:29:01.798982 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-17 00:29:01.798992 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-17 00:29:01.799002 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-17 00:29:01.799012 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-17 00:29:01.799052 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-17 00:29:01.799063 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-17 00:29:01.799072 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-17 00:29:01.799082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-17 00:29:01.799092 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-17 00:29:01.799102 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-17 00:29:01.799111 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-17 00:29:01.799121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-17 00:29:01.799131 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-17 00:29:01.799141 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:29:01.799151 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-17 00:29:01.799177 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-17 00:29:01.799188 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-17 00:29:01.799198 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-17 00:29:01.799225 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:29:01.799234 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-17 00:29:01.799242 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:29:01.799251 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-17 00:29:01.799260 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:29:01.799268 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-17 00:29:01.799284 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:29:01.799293 | orchestrator |
2026-03-17 00:29:01.799302 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-17 00:29:01.799310 | orchestrator |
2026-03-17 00:29:01.799319 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-17 00:29:01.799327 | orchestrator | Tuesday 17 March 2026 00:28:54 +0000 (0:00:00.351) 0:00:04.403 *********
2026-03-17 00:29:01.799336 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:01.799345 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:01.799354 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:01.799367 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:01.799381 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:01.799390 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:01.799398 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:01.799407 | orchestrator |
2026-03-17 00:29:01.799415 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-17 00:29:01.799425 | orchestrator | Tuesday 17 March 2026 00:28:55 +0000 (0:00:01.184) 0:00:05.587 *********
2026-03-17 00:29:01.799433 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:01.799442 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:01.799450 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:01.799458 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:01.799467 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:01.799475 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:01.799484 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:01.799492 | orchestrator |
2026-03-17 00:29:01.799501 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-17 00:29:01.799509 | orchestrator | Tuesday 17 March 2026 00:28:56 +0000 (0:00:01.223) 0:00:06.810 *********
2026-03-17 00:29:01.799519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:29:01.799530 | orchestrator |
2026-03-17 00:29:01.799539 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-17 00:29:01.799548 | orchestrator | Tuesday 17 March 2026 00:28:57 +0000 (0:00:00.260) 0:00:07.071 *********
2026-03-17 00:29:01.799556 | orchestrator | changed: [testbed-manager]
2026-03-17 00:29:01.799565 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:29:01.799573 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:29:01.799582 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:29:01.799590 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:29:01.799599 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:29:01.799607 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:29:01.799615 | orchestrator |
2026-03-17 00:29:01.799624 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-17 00:29:01.799633 | orchestrator | Tuesday 17 March 2026 00:28:59 +0000 (0:00:02.111) 0:00:09.182 *********
2026-03-17 00:29:01.799641 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:29:01.799651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:29:01.799661 | orchestrator |
2026-03-17 00:29:01.799670 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-17 00:29:01.799678 | orchestrator | Tuesday 17 March 2026 00:28:59 +0000 (0:00:00.250) 0:00:09.432 *********
2026-03-17 00:29:01.799687 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:29:01.799696 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:29:01.799704 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:29:01.799713 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:29:01.799721 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:29:01.799729 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:29:01.799738 | orchestrator |
2026-03-17 00:29:01.799753 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-17 00:29:01.799766 | orchestrator | Tuesday 17 March 2026 00:29:00 +0000 (0:00:01.022) 0:00:10.455 *********
2026-03-17 00:29:01.799774 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:29:01.799783 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:29:01.799791 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:29:01.799800 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:29:01.799808 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:29:01.799817 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:29:01.799825 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:29:01.799833 | orchestrator |
2026-03-17 00:29:01.799842 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-17 00:29:01.799851 | orchestrator | Tuesday 17 March 2026 00:29:01 +0000 (0:00:00.633) 0:00:11.088 *********
2026-03-17 00:29:01.799859 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:29:01.799868 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:29:01.799876 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:29:01.799884 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:29:01.799893 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:29:01.799901 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:29:01.799909 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:01.799918 | orchestrator |
2026-03-17 00:29:01.799927 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-17 00:29:01.799936 | orchestrator | Tuesday 17 March 2026 00:29:01 +0000 (0:00:00.429) 0:00:11.518 *********
2026-03-17 00:29:01.799944 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:29:01.799953 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:29:01.799967 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:29:15.354116 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:29:15.354222 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:29:15.354236 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:29:15.354246 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:29:15.354257 | orchestrator |
2026-03-17 00:29:15.354268 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-17 00:29:15.354280 | orchestrator | Tuesday 17 March 2026 00:29:01 +0000 (0:00:00.262) 0:00:11.780 *********
2026-03-17 00:29:15.354291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:29:15.354352 | orchestrator |
2026-03-17 00:29:15.354363 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-17 00:29:15.354375 | orchestrator | Tuesday 17 March 2026 00:29:02 +0000 (0:00:00.307) 0:00:12.088 *********
2026-03-17 00:29:15.354385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:29:15.354395 | orchestrator |
2026-03-17 00:29:15.354406 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-17 00:29:15.354415 | orchestrator | Tuesday 17 March 2026 00:29:02 +0000 (0:00:00.348) 0:00:12.436 *********
2026-03-17 00:29:15.354425 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:15.354436 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:15.354446 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:15.354455 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:15.354465 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:15.354475 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:15.354484 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:15.354494 | orchestrator |
2026-03-17 00:29:15.354504 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-17 00:29:15.354514 | orchestrator | Tuesday 17 March 2026 00:29:04 +0000 (0:00:01.602) 0:00:14.039 *********
2026-03-17 00:29:15.354548 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:29:15.354561 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:29:15.354572 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:29:15.354582 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:29:15.354593 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:29:15.354604 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:29:15.354614 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:29:15.354625 | orchestrator |
2026-03-17 00:29:15.354637 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-17 00:29:15.354647 | orchestrator | Tuesday 17 March 2026 00:29:04 +0000 (0:00:00.221) 0:00:14.260 *********
2026-03-17 00:29:15.354658 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:15.354669 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:15.354679 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:15.354691 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:15.354702 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:15.354712 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:15.354721 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:15.354730 | orchestrator |
2026-03-17 00:29:15.354740 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-17 00:29:15.354750 | orchestrator | Tuesday 17 March 2026 00:29:04 +0000 (0:00:00.574) 0:00:14.835 *********
2026-03-17 00:29:15.354760 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:29:15.354769 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:29:15.354779 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:29:15.354788 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:29:15.354798 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:29:15.354807 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:29:15.354817 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:29:15.354827 | orchestrator |
2026-03-17 00:29:15.354837 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-17 00:29:15.354847 | orchestrator | Tuesday 17 March 2026 00:29:05 +0000 (0:00:00.342) 0:00:15.177 *********
2026-03-17 00:29:15.354857 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:15.354867 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:29:15.354876 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:29:15.354886 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:29:15.354895 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:29:15.354904 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:29:15.354914 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:29:15.354923 | orchestrator |
2026-03-17 00:29:15.354946 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-17 00:29:15.354956 | orchestrator | Tuesday 17 March 2026 00:29:05 +0000 (0:00:00.626) 0:00:15.804 *********
2026-03-17 00:29:15.354966 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:15.354975 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:29:15.354985 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:29:15.354994 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:29:15.355004 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:29:15.355032 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:29:15.355041 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:29:15.355051 | orchestrator |
2026-03-17 00:29:15.355061 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-17 00:29:15.355070 | orchestrator | Tuesday 17 March 2026 00:29:07 +0000 (0:00:01.258) 0:00:17.063 *********
2026-03-17 00:29:15.355080 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:15.355090 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:15.355099 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:15.355109 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:15.355118 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:15.355127 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:15.355137 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:15.355146 | orchestrator |
2026-03-17 00:29:15.355156 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-17 00:29:15.355173 | orchestrator | Tuesday 17 March 2026 00:29:08 +0000 (0:00:01.060) 0:00:18.123 *********
2026-03-17 00:29:15.355202 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:29:15.355213 | orchestrator |
2026-03-17 00:29:15.355223 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-17 00:29:15.355232 | orchestrator | Tuesday 17 March 2026 00:29:08 +0000 (0:00:00.300) 0:00:18.423 *********
2026-03-17 00:29:15.355242 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:29:15.355251 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:29:15.355261 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:29:15.355270 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:29:15.355280 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:29:15.355289 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:29:15.355298 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:29:15.355308 | orchestrator |
2026-03-17 00:29:15.355317 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-17 00:29:15.355327 | orchestrator | Tuesday 17 March 2026 00:29:10 +0000 (0:00:02.296) 0:00:20.719 *********
2026-03-17 00:29:15.355336 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:15.355346 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:15.355355 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:15.355365 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:15.355374 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:15.355384 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:15.355393 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:15.355403 | orchestrator |
2026-03-17 00:29:15.355412 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-17 00:29:15.355422 | orchestrator | Tuesday 17 March 2026 00:29:11 +0000 (0:00:00.217) 0:00:20.937 *********
2026-03-17 00:29:15.355431 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:15.355441 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:15.355450 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:15.355459 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:15.355468 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:15.355478 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:15.355487 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:15.355496 | orchestrator |
2026-03-17 00:29:15.355506 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-17 00:29:15.355515 | orchestrator | Tuesday 17 March 2026 00:29:11 +0000 (0:00:00.213) 0:00:21.150 *********
2026-03-17 00:29:15.355525 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:15.355534 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:15.355543 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:15.355553 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:15.355562 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:15.355571 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:15.355580 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:15.355590 | orchestrator |
2026-03-17 00:29:15.355599 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-17 00:29:15.355609 | orchestrator | Tuesday 17 March 2026 00:29:11 +0000 (0:00:00.196) 0:00:21.347 *********
2026-03-17 00:29:15.355619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:29:15.355630 | orchestrator |
2026-03-17 00:29:15.355640 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-17 00:29:15.355649 | orchestrator | Tuesday 17 March 2026 00:29:11 +0000 (0:00:00.542) 0:00:21.609 *********
2026-03-17 00:29:15.355659 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:15.355668 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:15.355684 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:15.355694 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:15.355703 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:15.355712 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:15.355722 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:15.355731 | orchestrator |
2026-03-17 00:29:15.355741 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-17 00:29:15.355751 | orchestrator | Tuesday 17 March 2026 00:29:12 +0000 (0:00:00.542) 0:00:22.151 *********
2026-03-17 00:29:15.355760 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:29:15.355770 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:29:15.355779 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:29:15.355788 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:29:15.355798 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:29:15.355807 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:29:15.355816 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:29:15.355826 | orchestrator |
2026-03-17 00:29:15.355836 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-17 00:29:15.355845 | orchestrator | Tuesday 17 March 2026 00:29:12 +0000 (0:00:00.217) 0:00:22.369 *********
2026-03-17 00:29:15.355855 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:15.355864 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:15.355874 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:15.355883 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:15.355893 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:29:15.355902 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:29:15.355911 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:29:15.355921 | orchestrator |
2026-03-17 00:29:15.355930 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-17 00:29:15.355940 | orchestrator | Tuesday 17 March 2026 00:29:13 +0000 (0:00:01.114) 0:00:23.484 *********
2026-03-17 00:29:15.355949 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:15.355958 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:15.355968 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:15.355977 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:15.355995 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:15.356004 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:15.356029 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:15.356039 | orchestrator |
2026-03-17 00:29:15.356049 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-17 00:29:15.356058 | orchestrator | Tuesday 17 March 2026 00:29:14 +0000 (0:00:00.577) 0:00:24.062 *********
2026-03-17 00:29:15.356068 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:15.356077 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:15.356086 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:15.356096 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:15.356112 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:29:55.233189 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:29:55.233318 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:29:55.233346 | orchestrator |
2026-03-17 00:29:55.233368 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-17 00:29:55.233382 | orchestrator | Tuesday 17 March 2026 00:29:15 +0000 (0:00:01.182) 0:00:25.244 *********
2026-03-17 00:29:55.233393 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:55.233405 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:55.233416 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:55.233427 | orchestrator | changed: [testbed-manager]
2026-03-17 00:29:55.233438 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:29:55.233449 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:29:55.233460 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:29:55.233471 | orchestrator |
2026-03-17 00:29:55.233482 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-17 00:29:55.233493 | orchestrator | Tuesday 17 March 2026 00:29:32 +0000 (0:00:16.794) 0:00:42.038 *********
2026-03-17 00:29:55.233503 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:55.233537 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:55.233548 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:55.233559 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:55.233569 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:55.233580 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:55.233590 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:55.233601 | orchestrator |
2026-03-17 00:29:55.233612 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-17 00:29:55.233623 | orchestrator | Tuesday 17 March 2026 00:29:32 +0000 (0:00:00.259) 0:00:42.298 *********
2026-03-17 00:29:55.233633 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:55.233646 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:55.233657 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:55.233670 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:55.233682 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:55.233694 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:55.233706 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:55.233718 | orchestrator |
2026-03-17 00:29:55.233731 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-17 00:29:55.233743 | orchestrator | Tuesday 17 March 2026 00:29:32 +0000 (0:00:00.222) 0:00:42.521 *********
2026-03-17 00:29:55.233755 | orchestrator | ok: [testbed-manager]
2026-03-17 00:29:55.233767 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:29:55.233780 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:29:55.233792 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:29:55.233803 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:29:55.233815 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:29:55.233829 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:29:55.233849 | orchestrator |
2026-03-17 00:29:55.233870 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-17 00:29:55.233889 | orchestrator | Tuesday 17 March 2026 00:29:32 +0000 (0:00:00.200) 0:00:42.721 *********
2026-03-17
00:29:55.233908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:29:55.233924 | orchestrator | 2026-03-17 00:29:55.233936 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-17 00:29:55.233949 | orchestrator | Tuesday 17 March 2026 00:29:33 +0000 (0:00:00.238) 0:00:42.960 ********* 2026-03-17 00:29:55.233961 | orchestrator | ok: [testbed-manager] 2026-03-17 00:29:55.233973 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:29:55.233985 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:29:55.234084 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:29:55.234096 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:29:55.234107 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:29:55.234127 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:29:55.234138 | orchestrator | 2026-03-17 00:29:55.234149 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-17 00:29:55.234160 | orchestrator | Tuesday 17 March 2026 00:29:35 +0000 (0:00:02.158) 0:00:45.118 ********* 2026-03-17 00:29:55.234171 | orchestrator | changed: [testbed-manager] 2026-03-17 00:29:55.234182 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:29:55.234193 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:29:55.234203 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:29:55.234214 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:29:55.234224 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:29:55.234235 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:29:55.234246 | orchestrator | 2026-03-17 00:29:55.234256 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-17 00:29:55.234267 | 
orchestrator | Tuesday 17 March 2026 00:29:36 +0000 (0:00:01.062) 0:00:46.181 ********* 2026-03-17 00:29:55.234293 | orchestrator | ok: [testbed-manager] 2026-03-17 00:29:55.234304 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:29:55.234315 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:29:55.234325 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:29:55.234346 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:29:55.234357 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:29:55.234367 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:29:55.234378 | orchestrator | 2026-03-17 00:29:55.234389 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-17 00:29:55.234400 | orchestrator | Tuesday 17 March 2026 00:29:37 +0000 (0:00:00.851) 0:00:47.032 ********* 2026-03-17 00:29:55.234411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:29:55.234424 | orchestrator | 2026-03-17 00:29:55.234435 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-17 00:29:55.234446 | orchestrator | Tuesday 17 March 2026 00:29:37 +0000 (0:00:00.267) 0:00:47.300 ********* 2026-03-17 00:29:55.234457 | orchestrator | changed: [testbed-manager] 2026-03-17 00:29:55.234468 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:29:55.234478 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:29:55.234489 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:29:55.234500 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:29:55.234510 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:29:55.234520 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:29:55.234531 | orchestrator | 2026-03-17 00:29:55.234560 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-03-17 00:29:55.234572 | orchestrator | Tuesday 17 March 2026 00:29:38 +0000 (0:00:00.997) 0:00:48.297 ********* 2026-03-17 00:29:55.234583 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:29:55.234594 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:29:55.234604 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:29:55.234615 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:29:55.234625 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:29:55.234636 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:29:55.234646 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:29:55.234657 | orchestrator | 2026-03-17 00:29:55.234668 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-17 00:29:55.234678 | orchestrator | Tuesday 17 March 2026 00:29:38 +0000 (0:00:00.210) 0:00:48.508 ********* 2026-03-17 00:29:55.234690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:29:55.234701 | orchestrator | 2026-03-17 00:29:55.234711 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-17 00:29:55.234722 | orchestrator | Tuesday 17 March 2026 00:29:38 +0000 (0:00:00.290) 0:00:48.799 ********* 2026-03-17 00:29:55.234732 | orchestrator | ok: [testbed-manager] 2026-03-17 00:29:55.234743 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:29:55.234754 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:29:55.234764 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:29:55.234775 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:29:55.234785 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:29:55.234796 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:29:55.234806 | 
orchestrator | 2026-03-17 00:29:55.234817 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-17 00:29:55.234828 | orchestrator | Tuesday 17 March 2026 00:29:40 +0000 (0:00:01.922) 0:00:50.721 ********* 2026-03-17 00:29:55.234839 | orchestrator | changed: [testbed-manager] 2026-03-17 00:29:55.234849 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:29:55.234863 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:29:55.234883 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:29:55.234902 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:29:55.234921 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:29:55.234933 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:29:55.234943 | orchestrator | 2026-03-17 00:29:55.234962 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-17 00:29:55.234973 | orchestrator | Tuesday 17 March 2026 00:29:42 +0000 (0:00:01.235) 0:00:51.957 ********* 2026-03-17 00:29:55.234984 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:29:55.235060 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:29:55.235075 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:29:55.235086 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:29:55.235102 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:29:55.235121 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:29:55.235143 | orchestrator | changed: [testbed-manager] 2026-03-17 00:29:55.235163 | orchestrator | 2026-03-17 00:29:55.235182 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-17 00:29:55.235203 | orchestrator | Tuesday 17 March 2026 00:29:52 +0000 (0:00:10.734) 0:01:02.692 ********* 2026-03-17 00:29:55.235223 | orchestrator | ok: [testbed-manager] 2026-03-17 00:29:55.235242 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:29:55.235262 | orchestrator | ok: 
[testbed-node-1] 2026-03-17 00:29:55.235281 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:29:55.235300 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:29:55.235319 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:29:55.235340 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:29:55.235360 | orchestrator | 2026-03-17 00:29:55.235379 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-17 00:29:55.235400 | orchestrator | Tuesday 17 March 2026 00:29:53 +0000 (0:00:00.856) 0:01:03.548 ********* 2026-03-17 00:29:55.235420 | orchestrator | ok: [testbed-manager] 2026-03-17 00:29:55.235440 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:29:55.235459 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:29:55.235479 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:29:55.235499 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:29:55.235519 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:29:55.235539 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:29:55.235560 | orchestrator | 2026-03-17 00:29:55.235580 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-17 00:29:55.235601 | orchestrator | Tuesday 17 March 2026 00:29:54 +0000 (0:00:00.903) 0:01:04.452 ********* 2026-03-17 00:29:55.235621 | orchestrator | ok: [testbed-manager] 2026-03-17 00:29:55.235651 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:29:55.235669 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:29:55.235680 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:29:55.235690 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:29:55.235701 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:29:55.235711 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:29:55.235722 | orchestrator | 2026-03-17 00:29:55.235737 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-17 00:29:55.235755 | orchestrator | Tuesday 
17 March 2026 00:29:54 +0000 (0:00:00.200) 0:01:04.653 ********* 2026-03-17 00:29:55.235775 | orchestrator | ok: [testbed-manager] 2026-03-17 00:29:55.235793 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:29:55.235808 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:29:55.235819 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:29:55.235829 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:29:55.235840 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:29:55.235851 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:29:55.235861 | orchestrator | 2026-03-17 00:29:55.235872 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-17 00:29:55.235883 | orchestrator | Tuesday 17 March 2026 00:29:54 +0000 (0:00:00.193) 0:01:04.846 ********* 2026-03-17 00:29:55.235894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:29:55.235906 | orchestrator | 2026-03-17 00:29:55.235928 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-17 00:32:23.877026 | orchestrator | Tuesday 17 March 2026 00:29:55 +0000 (0:00:00.276) 0:01:05.123 ********* 2026-03-17 00:32:23.877139 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:23.877156 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:23.877168 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:23.877179 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:23.877189 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:23.877200 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:23.877211 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:23.877221 | orchestrator | 2026-03-17 00:32:23.877233 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-03-17 00:32:23.877244 | orchestrator | Tuesday 17 March 2026 00:29:56 +0000 (0:00:01.623) 0:01:06.746 ********* 2026-03-17 00:32:23.877254 | orchestrator | changed: [testbed-manager] 2026-03-17 00:32:23.877266 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:32:23.877277 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:32:23.877288 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:32:23.877298 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:32:23.877309 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:32:23.877319 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:32:23.877330 | orchestrator | 2026-03-17 00:32:23.877341 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-17 00:32:23.877352 | orchestrator | Tuesday 17 March 2026 00:29:57 +0000 (0:00:00.661) 0:01:07.408 ********* 2026-03-17 00:32:23.877363 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:23.877374 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:23.877384 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:23.877395 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:23.877406 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:23.877416 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:23.877426 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:23.877437 | orchestrator | 2026-03-17 00:32:23.877448 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-17 00:32:23.877460 | orchestrator | Tuesday 17 March 2026 00:29:57 +0000 (0:00:00.214) 0:01:07.625 ********* 2026-03-17 00:32:23.877472 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:23.877484 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:23.877496 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:23.877507 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:23.877519 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:23.877532 | 
orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:23.877544 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:23.877556 | orchestrator | 2026-03-17 00:32:23.877568 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-17 00:32:23.877581 | orchestrator | Tuesday 17 March 2026 00:29:58 +0000 (0:00:01.258) 0:01:08.884 ********* 2026-03-17 00:32:23.877594 | orchestrator | changed: [testbed-manager] 2026-03-17 00:32:23.877607 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:32:23.877619 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:32:23.877631 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:32:23.877643 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:32:23.877655 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:32:23.877673 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:32:23.877691 | orchestrator | 2026-03-17 00:32:23.877719 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-17 00:32:23.877746 | orchestrator | Tuesday 17 March 2026 00:30:00 +0000 (0:00:01.864) 0:01:10.748 ********* 2026-03-17 00:32:23.877764 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:23.877785 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:23.877806 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:23.877825 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:23.877843 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:23.877857 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:23.877868 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:23.877879 | orchestrator | 2026-03-17 00:32:23.877890 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-17 00:32:23.877953 | orchestrator | Tuesday 17 March 2026 00:30:03 +0000 (0:00:02.701) 0:01:13.450 ********* 2026-03-17 00:32:23.877966 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:23.877977 
| orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:23.877987 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:23.877998 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:23.878008 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:23.878079 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:23.878091 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:23.878102 | orchestrator | 2026-03-17 00:32:23.878113 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-17 00:32:23.878124 | orchestrator | Tuesday 17 March 2026 00:30:50 +0000 (0:00:47.061) 0:02:00.511 ********* 2026-03-17 00:32:23.878134 | orchestrator | changed: [testbed-manager] 2026-03-17 00:32:23.878145 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:32:23.878156 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:32:23.878167 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:32:23.878178 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:32:23.878188 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:32:23.878199 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:32:23.878210 | orchestrator | 2026-03-17 00:32:23.878221 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-17 00:32:23.878232 | orchestrator | Tuesday 17 March 2026 00:32:09 +0000 (0:01:19.332) 0:03:19.844 ********* 2026-03-17 00:32:23.878243 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:23.878254 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:23.878265 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:23.878275 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:23.878286 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:23.878297 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:23.878307 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:23.878318 | orchestrator | 2026-03-17 00:32:23.878329 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-03-17 00:32:23.878340 | orchestrator | Tuesday 17 March 2026 00:32:11 +0000 (0:00:02.027) 0:03:21.872 ********* 2026-03-17 00:32:23.878350 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:23.878361 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:23.878371 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:23.878382 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:23.878392 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:23.878403 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:23.878414 | orchestrator | changed: [testbed-manager] 2026-03-17 00:32:23.878424 | orchestrator | 2026-03-17 00:32:23.878435 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-17 00:32:23.878446 | orchestrator | Tuesday 17 March 2026 00:32:22 +0000 (0:00:10.718) 0:03:32.590 ********* 2026-03-17 00:32:23.878492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-17 00:32:23.878537 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-17 00:32:23.878553 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-17 00:32:23.878576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-17 00:32:23.878588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-17 00:32:23.878599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-17 00:32:23.878610 | orchestrator | 2026-03-17 00:32:23.878621 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-17 00:32:23.878632 | orchestrator | Tuesday 17 March 2026 00:32:23 +0000 (0:00:00.387) 0:03:32.978 ********* 2026-03-17 00:32:23.878643 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-17 00:32:23.878662 | orchestrator | 
skipping: [testbed-manager] 2026-03-17 00:32:23.878680 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-17 00:32:23.878698 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:32:23.878716 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-17 00:32:23.878743 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-17 00:32:23.878762 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:32:23.878780 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:32:23.878791 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:32:23.878802 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:32:23.878812 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:32:23.878823 | orchestrator | 2026-03-17 00:32:23.878833 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-17 00:32:23.878844 | orchestrator | Tuesday 17 March 2026 00:32:23 +0000 (0:00:00.720) 0:03:33.699 ********* 2026-03-17 00:32:23.878854 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-17 00:32:23.878866 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-17 00:32:23.878877 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-17 00:32:23.878888 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-17 00:32:23.878898 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-17 00:32:23.878953 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-17 00:32:30.182514 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-17 00:32:30.182601 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-17 00:32:30.182612 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-17 00:32:30.182640 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-17 00:32:30.182649 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-17 00:32:30.182657 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-17 00:32:30.182664 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-17 00:32:30.182671 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-17 00:32:30.182678 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-17 00:32:30.182685 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-17 00:32:30.182692 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-17 00:32:30.182700 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-17 00:32:30.182706 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-17 00:32:30.182712 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-17 00:32:30.182719 | orchestrator | 
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-17 00:32:30.182725 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-17 00:32:30.182732 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-17 00:32:30.182739 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-17 00:32:30.182746 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:32:30.182754 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-17 00:32:30.182761 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-17 00:32:30.182768 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-17 00:32:30.182775 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-17 00:32:30.182782 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-17 00:32:30.182788 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-17 00:32:30.182796 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-17 00:32:30.182803 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-17 00:32:30.182809 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-17 00:32:30.182816 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:32:30.182823 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-17 00:32:30.182830 | orchestrator 
| skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-17 00:32:30.182849 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-17 00:32:30.182857 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-17 00:32:30.182864 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-17 00:32:30.182870 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-17 00:32:30.182877 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-17 00:32:30.182890 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:32:30.182896 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:32:30.182903 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-17 00:32:30.182910 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-17 00:32:30.182979 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-17 00:32:30.182986 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-17 00:32:30.182993 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-17 00:32:30.183014 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-17 00:32:30.183020 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-17 00:32:30.183026 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-17 00:32:30.183032 | orchestrator | 
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-17 00:32:30.183048 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-17 00:32:30.183056 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-17 00:32:30.183063 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-17 00:32:30.183070 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-17 00:32:30.183076 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-17 00:32:30.183083 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-17 00:32:30.183089 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-17 00:32:30.183096 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-17 00:32:30.183103 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-17 00:32:30.183110 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-17 00:32:30.183117 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-17 00:32:30.183124 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-17 00:32:30.183131 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-17 00:32:30.183138 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-17 00:32:30.183145 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 
'value': 20}) 2026-03-17 00:32:30.183151 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-17 00:32:30.183159 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-17 00:32:30.183166 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-17 00:32:30.183172 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-17 00:32:30.183179 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-17 00:32:30.183186 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-17 00:32:30.183194 | orchestrator | 2026-03-17 00:32:30.183202 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-17 00:32:30.183215 | orchestrator | Tuesday 17 March 2026 00:32:29 +0000 (0:00:05.210) 0:03:38.909 ********* 2026-03-17 00:32:30.183222 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:32:30.183229 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:32:30.183235 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:32:30.183242 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:32:30.183249 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:32:30.183260 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:32:30.183267 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:32:30.183273 | orchestrator | 2026-03-17 00:32:30.183280 | 
orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-03-17 00:32:30.183287 | orchestrator | Tuesday 17 March 2026 00:32:29 +0000 (0:00:00.636) 0:03:39.545 ********* 2026-03-17 00:32:30.183294 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:32:30.183301 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:32:30.183308 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:32:30.183315 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:32:30.183322 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:32:30.183329 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:32:30.183336 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:32:30.183343 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:32:30.183350 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-17 00:32:30.183356 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-17 00:32:30.183368 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-17 00:32:43.261815 | orchestrator | 2026-03-17 00:32:43.262149 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-03-17 00:32:43.262179 | orchestrator | Tuesday 17 March 2026 00:32:30 +0000 (0:00:00.529) 0:03:40.075 ********* 2026-03-17 00:32:43.262191 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:32:43.262204 | orchestrator | skipping: [testbed-manager] 
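The per-group sysctl runs logged here (generic vs. compute vs. network vs. k3s_node) follow a standard Ansible pattern: a parameter list per host group, applied with the `ansible.posix.sysctl` module and skipped on hosts outside the group. A minimal sketch — the variable names and the `group_names` condition are illustrative assumptions, not the actual internals of `osism.commons.sysctl`:

```yaml
# Hypothetical group-scoped parameter list; names/values mirror the log output.
sysctl_compute:
  - { name: net.netfilter.nf_conntrack_max, value: 1048576 }

tasks:
  - name: Set sysctl parameters on compute
    ansible.posix.sysctl:
      name: "{{ item.name }}"
      value: "{{ item.value }}"
      state: present
      sysctl_set: true      # apply immediately, not just persist to sysctl.d
    loop: "{{ sysctl_compute }}"
    # Hosts not in the group report "skipping:", as seen for testbed-manager above.
    when: "'compute' in group_names"
```

This matches the log: the manager and control nodes skip the compute-only conntrack tuning, while testbed-node-3/4/5 apply it and report `changed`.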
2026-03-17 00:32:43.262217 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:32:43.262228 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:32:43.262239 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:32:43.262250 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:32:43.262261 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:32:43.262271 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:32:43.262283 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-17 00:32:43.262295 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-17 00:32:43.262307 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-17 00:32:43.262320 | orchestrator | 2026-03-17 00:32:43.262332 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-03-17 00:32:43.262372 | orchestrator | Tuesday 17 March 2026 00:32:30 +0000 (0:00:00.626) 0:03:40.701 ********* 2026-03-17 00:32:43.262386 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-17 00:32:43.262399 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:32:43.262411 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-17 00:32:43.262424 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-17 00:32:43.262436 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:32:43.262448 | orchestrator | skipping: [testbed-node-1] 2026-03-17 
00:32:43.262460 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-17 00:32:43.262473 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:32:43.262485 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-17 00:32:43.262497 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-17 00:32:43.262509 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-17 00:32:43.262522 | orchestrator | 2026-03-17 00:32:43.262535 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-17 00:32:43.262546 | orchestrator | Tuesday 17 March 2026 00:32:31 +0000 (0:00:00.616) 0:03:41.317 ********* 2026-03-17 00:32:43.262557 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:32:43.262567 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:32:43.262578 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:32:43.262589 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:32:43.262599 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:32:43.262610 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:32:43.262621 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:32:43.262631 | orchestrator | 2026-03-17 00:32:43.262642 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-03-17 00:32:43.262653 | orchestrator | Tuesday 17 March 2026 00:32:31 +0000 (0:00:00.276) 0:03:41.594 ********* 2026-03-17 00:32:43.262664 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:43.262676 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:43.262687 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:43.262697 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:43.262708 | orchestrator | ok: 
[testbed-node-2] 2026-03-17 00:32:43.262718 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:43.262729 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:43.262739 | orchestrator | 2026-03-17 00:32:43.262750 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-03-17 00:32:43.262761 | orchestrator | Tuesday 17 March 2026 00:32:36 +0000 (0:00:05.081) 0:03:46.676 ********* 2026-03-17 00:32:43.262772 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-03-17 00:32:43.262783 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:32:43.262793 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-03-17 00:32:43.262804 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:32:43.262814 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-03-17 00:32:43.262825 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:32:43.262835 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-03-17 00:32:43.262846 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-03-17 00:32:43.262856 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:32:43.262886 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-03-17 00:32:43.262897 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:32:43.262935 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:32:43.262948 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-03-17 00:32:43.262958 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:32:43.262969 | orchestrator | 2026-03-17 00:32:43.262980 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-17 00:32:43.262999 | orchestrator | Tuesday 17 March 2026 00:32:37 +0000 (0:00:00.314) 0:03:46.991 ********* 2026-03-17 00:32:43.263010 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-17 00:32:43.263021 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-17 
00:32:43.263031 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-17 00:32:43.263064 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-17 00:32:43.263076 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-17 00:32:43.263086 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-17 00:32:43.263096 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-17 00:32:43.263107 | orchestrator | 2026-03-17 00:32:43.263118 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-17 00:32:43.263128 | orchestrator | Tuesday 17 March 2026 00:32:38 +0000 (0:00:01.226) 0:03:48.217 ********* 2026-03-17 00:32:43.263142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:32:43.263155 | orchestrator | 2026-03-17 00:32:43.263166 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-17 00:32:43.263176 | orchestrator | Tuesday 17 March 2026 00:32:38 +0000 (0:00:00.479) 0:03:48.696 ********* 2026-03-17 00:32:43.263187 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:43.263197 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:43.263208 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:43.263218 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:43.263229 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:43.263243 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:43.263261 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:43.263278 | orchestrator | 2026-03-17 00:32:43.263294 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-17 00:32:43.263309 | orchestrator | Tuesday 17 March 2026 00:32:40 +0000 (0:00:01.413) 0:03:50.109 ********* 2026-03-17 
00:32:43.263326 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:43.263344 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:43.263363 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:43.263382 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:43.263401 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:43.263418 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:43.263435 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:43.263446 | orchestrator | 2026-03-17 00:32:43.263456 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-17 00:32:43.263482 | orchestrator | Tuesday 17 March 2026 00:32:40 +0000 (0:00:00.681) 0:03:50.791 ********* 2026-03-17 00:32:43.263494 | orchestrator | changed: [testbed-manager] 2026-03-17 00:32:43.263515 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:32:43.263526 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:32:43.263536 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:32:43.263547 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:32:43.263557 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:32:43.263568 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:32:43.263578 | orchestrator | 2026-03-17 00:32:43.263588 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-17 00:32:43.263599 | orchestrator | Tuesday 17 March 2026 00:32:41 +0000 (0:00:00.679) 0:03:51.470 ********* 2026-03-17 00:32:43.263610 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:43.263620 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:43.263631 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:43.263641 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:43.263652 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:43.263662 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:43.263672 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:43.263683 | 
orchestrator | 2026-03-17 00:32:43.263693 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-17 00:32:43.263713 | orchestrator | Tuesday 17 March 2026 00:32:42 +0000 (0:00:00.619) 0:03:52.090 ********* 2026-03-17 00:32:43.263736 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705916.5273201, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:32:43.263751 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705912.642263, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:32:43.263762 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705937.205001, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-03-17 00:32:43.263798 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705910.7998788, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:32:48.509525 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705930.4391563, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:32:48.509612 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705924.417565, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:32:48.509621 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705940.763253, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:32:48.509646 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:32:48.509664 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:32:48.509670 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:32:48.509676 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:32:48.509702 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:32:48.509708 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:32:48.509714 | 
orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:32:48.509725 | orchestrator | 2026-03-17 00:32:48.509733 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-17 00:32:48.509740 | orchestrator | Tuesday 17 March 2026 00:32:43 +0000 (0:00:01.062) 0:03:53.153 ********* 2026-03-17 00:32:48.509746 | orchestrator | changed: [testbed-manager] 2026-03-17 00:32:48.509753 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:32:48.509770 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:32:48.509776 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:32:48.509781 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:32:48.509787 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:32:48.509793 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:32:48.509799 | orchestrator | 2026-03-17 00:32:48.509805 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-03-17 00:32:48.509810 | orchestrator | Tuesday 17 March 2026 00:32:44 +0000 (0:00:01.232) 0:03:54.386 ********* 2026-03-17 00:32:48.509816 | orchestrator | changed: [testbed-manager] 2026-03-17 00:32:48.509822 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:32:48.509827 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:32:48.509833 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:32:48.509838 | orchestrator | changed: [testbed-node-1] 2026-03-17 
00:32:48.509844 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:32:48.509850 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:32:48.509855 | orchestrator | 2026-03-17 00:32:48.509865 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-17 00:32:48.509871 | orchestrator | Tuesday 17 March 2026 00:32:45 +0000 (0:00:01.258) 0:03:55.645 ********* 2026-03-17 00:32:48.509877 | orchestrator | changed: [testbed-manager] 2026-03-17 00:32:48.509882 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:32:48.509888 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:32:48.509893 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:32:48.509940 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:32:48.509948 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:32:48.509953 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:32:48.509959 | orchestrator | 2026-03-17 00:32:48.509965 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-17 00:32:48.509971 | orchestrator | Tuesday 17 March 2026 00:32:47 +0000 (0:00:01.341) 0:03:56.986 ********* 2026-03-17 00:32:48.509977 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:32:48.509982 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:32:48.509988 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:32:48.509994 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:32:48.509999 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:32:48.510005 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:32:48.510011 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:32:48.510070 | orchestrator | 2026-03-17 00:32:48.510082 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-17 00:32:48.510092 | orchestrator | Tuesday 17 March 2026 00:32:47 +0000 (0:00:00.282) 0:03:57.269 ********* 2026-03-17 
00:32:48.510103 | orchestrator | ok: [testbed-manager] 2026-03-17 00:32:48.510115 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:32:48.510125 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:32:48.510136 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:32:48.510143 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:32:48.510149 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:32:48.510156 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:32:48.510163 | orchestrator | 2026-03-17 00:32:48.510169 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-17 00:32:48.510176 | orchestrator | Tuesday 17 March 2026 00:32:48 +0000 (0:00:00.738) 0:03:58.007 ********* 2026-03-17 00:32:48.510185 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:32:48.510200 | orchestrator | 2026-03-17 00:32:48.510206 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-17 00:32:48.510220 | orchestrator | Tuesday 17 March 2026 00:32:48 +0000 (0:00:00.399) 0:03:58.407 ********* 2026-03-17 00:34:09.931053 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:09.931195 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:34:09.931222 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:34:09.931243 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:34:09.931263 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:34:09.931284 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:34:09.931302 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:34:09.931324 | orchestrator | 2026-03-17 00:34:09.931345 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-17 00:34:09.931364 | orchestrator | 
Tuesday 17 March 2026 00:32:57 +0000 (0:00:09.391) 0:04:07.798 ********* 2026-03-17 00:34:09.931376 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:09.931387 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:09.931398 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:09.931408 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:09.931420 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:09.931430 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:09.931441 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:09.931452 | orchestrator | 2026-03-17 00:34:09.931463 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-17 00:34:09.931474 | orchestrator | Tuesday 17 March 2026 00:32:59 +0000 (0:00:01.462) 0:04:09.261 ********* 2026-03-17 00:34:09.931485 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:09.931495 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:09.931506 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:09.931517 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:09.931527 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:09.931540 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:09.931552 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:09.931565 | orchestrator | 2026-03-17 00:34:09.931577 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-17 00:34:09.931590 | orchestrator | Tuesday 17 March 2026 00:33:00 +0000 (0:00:01.213) 0:04:10.475 ********* 2026-03-17 00:34:09.931603 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:09.931614 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:09.931627 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:09.931639 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:09.931651 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:09.931665 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:09.931677 | orchestrator | ok: 
[testbed-node-2]
2026-03-17 00:34:09.931689 | orchestrator |
2026-03-17 00:34:09.931703 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-17 00:34:09.931716 | orchestrator | Tuesday 17 March 2026 00:33:00 +0000 (0:00:00.257) 0:04:10.732 *********
2026-03-17 00:34:09.931735 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:09.931757 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:09.931785 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:09.931802 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:09.931890 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:09.931907 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:09.931923 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:09.931939 | orchestrator |
2026-03-17 00:34:09.931956 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-17 00:34:09.931974 | orchestrator | Tuesday 17 March 2026 00:33:01 +0000 (0:00:00.297) 0:04:11.030 *********
2026-03-17 00:34:09.931992 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:09.932010 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:09.932027 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:09.932044 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:09.932094 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:09.932111 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:09.932129 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:09.932146 | orchestrator |
2026-03-17 00:34:09.932164 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-17 00:34:09.932183 | orchestrator | Tuesday 17 March 2026 00:33:01 +0000 (0:00:00.299) 0:04:11.329 *********
2026-03-17 00:34:09.932200 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:09.932218 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:09.932235 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:09.932254 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:09.932272 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:09.932291 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:09.932311 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:09.932327 | orchestrator |
2026-03-17 00:34:09.932346 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-17 00:34:09.932365 | orchestrator | Tuesday 17 March 2026 00:33:06 +0000 (0:00:05.169) 0:04:16.498 *********
2026-03-17 00:34:09.932388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:34:09.932412 | orchestrator |
2026-03-17 00:34:09.932432 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-17 00:34:09.932451 | orchestrator | Tuesday 17 March 2026 00:33:06 +0000 (0:00:00.361) 0:04:16.860 *********
2026-03-17 00:34:09.932470 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.932487 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-17 00:34:09.932505 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:34:09.932523 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.932541 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-17 00:34:09.932584 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.932605 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-17 00:34:09.932624 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:34:09.932644 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.932663 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-17 00:34:09.932683 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:34:09.932703 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:34:09.932722 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.932741 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-17 00:34:09.932760 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.932779 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:34:09.932857 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-17 00:34:09.932881 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:34:09.932903 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-17 00:34:09.932924 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-17 00:34:09.932942 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:34:09.932962 | orchestrator |
2026-03-17 00:34:09.932981 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-17 00:34:09.933000 | orchestrator | Tuesday 17 March 2026 00:33:07 +0000 (0:00:00.303) 0:04:17.163 *********
2026-03-17 00:34:09.933021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:34:09.933040 | orchestrator |
2026-03-17 00:34:09.933059 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-17 00:34:09.933078 | orchestrator | Tuesday 17 March 2026 00:33:07 +0000 (0:00:00.340) 0:04:17.503 *********
2026-03-17 00:34:09.933156 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-17 00:34:09.933179 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-17 00:34:09.933198 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:34:09.933218 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-17 00:34:09.933236 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:34:09.933254 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-17 00:34:09.933271 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:34:09.933290 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-17 00:34:09.933309 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:34:09.933328 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-17 00:34:09.933349 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:34:09.933369 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:34:09.933390 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-17 00:34:09.933411 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:34:09.933432 | orchestrator |
2026-03-17 00:34:09.933452 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-17 00:34:09.933473 | orchestrator | Tuesday 17 March 2026 00:33:07 +0000 (0:00:00.319) 0:04:17.823 *********
2026-03-17 00:34:09.933494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:34:09.933513 | orchestrator |
2026-03-17 00:34:09.933534 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-17 00:34:09.933555 | orchestrator | Tuesday 17 March 2026 00:33:08 +0000 (0:00:00.415) 0:04:18.239 *********
2026-03-17 00:34:09.933576 | orchestrator | changed: [testbed-manager]
2026-03-17 00:34:09.933597 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:34:09.933619 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:34:09.933641 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:34:09.933663 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:34:09.933696 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:34:09.933717 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:34:09.933739 | orchestrator |
2026-03-17 00:34:09.933760 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-17 00:34:09.933780 | orchestrator | Tuesday 17 March 2026 00:33:42 +0000 (0:00:34.115) 0:04:52.354 *********
2026-03-17 00:34:09.933801 | orchestrator | changed: [testbed-manager]
2026-03-17 00:34:09.933893 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:34:09.933914 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:34:09.933933 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:34:09.933953 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:34:09.933971 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:34:09.933989 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:34:09.934007 | orchestrator |
2026-03-17 00:34:09.934115 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-17 00:34:09.934136 | orchestrator | Tuesday 17 March 2026 00:33:52 +0000 (0:00:09.690) 0:05:02.044 *********
2026-03-17 00:34:09.934157 | orchestrator | changed: [testbed-manager]
2026-03-17 00:34:09.934173 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:34:09.934184 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:34:09.934195 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:34:09.934205 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:34:09.934216 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:34:09.934227 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:34:09.934237 | orchestrator |
2026-03-17 00:34:09.934248 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-17 00:34:09.934259 | orchestrator | Tuesday 17 March 2026 00:34:00 +0000 (0:00:08.771) 0:05:10.816 *********
2026-03-17 00:34:09.934284 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:09.934296 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:09.934306 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:09.934317 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:09.934328 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:09.934338 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:09.934349 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:09.934379 | orchestrator |
2026-03-17 00:34:09.934391 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-17 00:34:09.934403 | orchestrator | Tuesday 17 March 2026 00:34:02 +0000 (0:00:02.026) 0:05:12.843 *********
2026-03-17 00:34:09.934426 | orchestrator | changed: [testbed-manager]
2026-03-17 00:34:09.934437 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:34:09.934448 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:34:09.934458 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:34:09.934475 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:34:09.934494 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:34:09.934512 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:34:09.934530 | orchestrator |
2026-03-17 00:34:09.934570 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-17 00:34:21.138238 | orchestrator | Tuesday 17 March 2026 00:34:09 +0000 (0:00:06.976) 0:05:19.820 *********
2026-03-17 00:34:21.138371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:34:21.138400 | orchestrator |
2026-03-17 00:34:21.138421 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-17 00:34:21.138439 | orchestrator | Tuesday 17 March 2026 00:34:10 +0000 (0:00:00.522) 0:05:20.342 *********
2026-03-17 00:34:21.138458 | orchestrator | changed: [testbed-manager]
2026-03-17 00:34:21.138479 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:34:21.138497 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:34:21.138516 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:34:21.138534 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:34:21.138551 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:34:21.138568 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:34:21.138586 | orchestrator |
2026-03-17 00:34:21.138604 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-17 00:34:21.138621 | orchestrator | Tuesday 17 March 2026 00:34:11 +0000 (0:00:00.774) 0:05:21.116 *********
2026-03-17 00:34:21.138638 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:21.138657 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:21.138676 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:21.138695 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:21.138712 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:21.138727 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:21.138743 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:21.138761 | orchestrator |
2026-03-17 00:34:21.138779 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-17 00:34:21.138829 | orchestrator | Tuesday 17 March 2026 00:34:13 +0000 (0:00:01.970) 0:05:23.087 *********
2026-03-17 00:34:21.138851 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:34:21.138869 | orchestrator | changed: [testbed-manager]
2026-03-17 00:34:21.138887 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:34:21.138905 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:34:21.138922 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:34:21.138943 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:34:21.138961 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:34:21.138977 | orchestrator |
2026-03-17 00:34:21.138996 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-17 00:34:21.139014 | orchestrator | Tuesday 17 March 2026 00:34:13 +0000 (0:00:00.776) 0:05:23.864 *********
2026-03-17 00:34:21.139033 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:34:21.139085 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:34:21.139106 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:34:21.139123 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:34:21.139140 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:34:21.139156 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:34:21.139171 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:34:21.139187 | orchestrator |
2026-03-17 00:34:21.139203 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-17 00:34:21.139220 | orchestrator | Tuesday 17 March 2026 00:34:14 +0000 (0:00:00.263) 0:05:24.127 *********
2026-03-17 00:34:21.139237 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:34:21.139253 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:34:21.139269 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:34:21.139285 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:34:21.139321 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:34:21.139340 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:34:21.139355 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:34:21.139371 | orchestrator |
2026-03-17 00:34:21.139387 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-17 00:34:21.139404 | orchestrator | Tuesday 17 March 2026 00:34:14 +0000 (0:00:00.263) 0:05:24.472 *********
2026-03-17 00:34:21.139419 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:21.139436 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:21.139452 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:21.139469 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:21.139484 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:21.139500 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:21.139516 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:21.139533 | orchestrator |
2026-03-17 00:34:21.139550 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-17 00:34:21.139566 | orchestrator | Tuesday 17 March 2026 00:34:14 +0000 (0:00:00.263) 0:05:24.736 *********
2026-03-17 00:34:21.139584 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:34:21.139600 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:34:21.139616 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:34:21.139632 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:34:21.139648 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:34:21.139664 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:34:21.139679 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:34:21.139695 | orchestrator |
2026-03-17 00:34:21.139711 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-03-17 00:34:21.139727 | orchestrator | Tuesday 17 March 2026 00:34:15 +0000 (0:00:00.254) 0:05:24.990 *********
2026-03-17 00:34:21.139743 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:21.139760 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:21.139776 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:21.139793 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:21.139838 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:21.139854 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:21.139869 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:21.139886 | orchestrator |
2026-03-17 00:34:21.139903 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-03-17 00:34:21.139919 | orchestrator | Tuesday 17 March 2026 00:34:15 +0000 (0:00:00.279) 0:05:25.269 *********
2026-03-17 00:34:21.139935 | orchestrator | ok: [testbed-manager] =>
2026-03-17 00:34:21.139951 | orchestrator |  docker_version: 5:27.5.1
2026-03-17 00:34:21.139966 | orchestrator | ok: [testbed-node-3] =>
2026-03-17 00:34:21.139982 | orchestrator |  docker_version: 5:27.5.1
2026-03-17 00:34:21.139998 | orchestrator | ok: [testbed-node-4] =>
2026-03-17 00:34:21.140013 | orchestrator |  docker_version: 5:27.5.1
2026-03-17 00:34:21.140029 | orchestrator | ok: [testbed-node-5] =>
2026-03-17 00:34:21.140045 | orchestrator |  docker_version: 5:27.5.1
2026-03-17 00:34:21.140092 | orchestrator | ok: [testbed-node-0] =>
2026-03-17 00:34:21.140130 | orchestrator |  docker_version: 5:27.5.1
2026-03-17 00:34:21.140146 | orchestrator | ok: [testbed-node-1] =>
2026-03-17 00:34:21.140165 | orchestrator |  docker_version: 5:27.5.1
2026-03-17 00:34:21.140180 | orchestrator | ok: [testbed-node-2] =>
2026-03-17 00:34:21.140196 | orchestrator |  docker_version: 5:27.5.1
2026-03-17 00:34:21.140212 | orchestrator |
2026-03-17 00:34:21.140230 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-03-17 00:34:21.140247 | orchestrator | Tuesday 17 March 2026 00:34:15 +0000 (0:00:00.265) 0:05:25.534 *********
2026-03-17 00:34:21.140264 | orchestrator | ok: [testbed-manager] =>
2026-03-17 00:34:21.140280 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-17 00:34:21.140296 | orchestrator | ok: [testbed-node-3] =>
2026-03-17 00:34:21.140316 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-17 00:34:21.140335 | orchestrator | ok: [testbed-node-4] =>
2026-03-17 00:34:21.140351 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-17 00:34:21.140368 | orchestrator | ok: [testbed-node-5] =>
2026-03-17 00:34:21.140386 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-17 00:34:21.140404 | orchestrator | ok: [testbed-node-0] =>
2026-03-17 00:34:21.140421 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-17 00:34:21.140438 | orchestrator | ok: [testbed-node-1] =>
2026-03-17 00:34:21.140456 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-17 00:34:21.140473 | orchestrator | ok: [testbed-node-2] =>
2026-03-17 00:34:21.140490 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-17 00:34:21.140508 | orchestrator |
2026-03-17 00:34:21.140526 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-03-17 00:34:21.140544 | orchestrator | Tuesday 17 March 2026 00:34:15 +0000 (0:00:00.272) 0:05:25.807 *********
2026-03-17 00:34:21.140562 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:34:21.140579 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:34:21.140597 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:34:21.140614 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:34:21.140632 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:34:21.140650 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:34:21.140669 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:34:21.140686 | orchestrator |
2026-03-17 00:34:21.140703 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-03-17 00:34:21.140720 | orchestrator | Tuesday 17 March 2026 00:34:16 +0000 (0:00:00.235) 0:05:26.043 *********
2026-03-17 00:34:21.140737 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:34:21.140754 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:34:21.140771 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:34:21.140788 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:34:21.140946 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:34:21.140967 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:34:21.140983 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:34:21.140999 | orchestrator |
2026-03-17 00:34:21.141015 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-03-17 00:34:21.141032 | orchestrator | Tuesday 17 March 2026 00:34:16 +0000 (0:00:00.248) 0:05:26.292 *********
2026-03-17 00:34:21.141052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:34:21.141074 | orchestrator |
2026-03-17 00:34:21.141092 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-03-17 00:34:21.141126 | orchestrator | Tuesday 17 March 2026 00:34:16 +0000 (0:00:00.385) 0:05:26.678 *********
2026-03-17 00:34:21.141144 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:21.141164 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:21.141176 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:21.141186 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:21.141197 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:21.141208 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:21.141232 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:21.141242 | orchestrator |
2026-03-17 00:34:21.141253 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-03-17 00:34:21.141263 | orchestrator | Tuesday 17 March 2026 00:34:17 +0000 (0:00:00.987) 0:05:27.665 *********
2026-03-17 00:34:21.141272 | orchestrator | ok: [testbed-manager]
2026-03-17 00:34:21.141281 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:34:21.141291 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:34:21.141300 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:34:21.141309 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:34:21.141319 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:34:21.141328 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:34:21.141337 | orchestrator |
2026-03-17 00:34:21.141347 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-03-17 00:34:21.141358 | orchestrator | Tuesday 17 March 2026 00:34:20 +0000 (0:00:03.019) 0:05:30.685 *********
2026-03-17 00:34:21.141367 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-03-17 00:34:21.141377 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-03-17 00:34:21.141387 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-03-17 00:34:21.141396 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-03-17 00:34:21.141406 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-03-17 00:34:21.141415 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-03-17 00:34:21.141425 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:34:21.141434 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-03-17 00:34:21.141443 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-03-17 00:34:21.141453 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-03-17 00:34:21.141462 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:34:21.141472 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-03-17 00:34:21.141481 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-03-17 00:34:21.141490 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-03-17 00:34:21.141500 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:34:21.141510 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-03-17 00:34:21.141535 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-03-17 00:35:27.845948 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-03-17 00:35:27.846095 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:35:27.846108 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-03-17 00:35:27.846124 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-03-17 00:35:27.846131 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-03-17 00:35:27.846137 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:35:27.846150 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:35:27.846156 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-03-17 00:35:27.846162 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-03-17 00:35:27.846168 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-03-17 00:35:27.846174 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:35:27.846180 | orchestrator |
2026-03-17 00:35:27.846187 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-03-17 00:35:27.846195 | orchestrator | Tuesday 17 March 2026 00:34:21 +0000 (0:00:00.538) 0:05:31.223 *********
2026-03-17 00:35:27.846200 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:27.846207 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:27.846212 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:27.846219 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:27.846224 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:27.846231 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:27.846237 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:27.846264 | orchestrator |
2026-03-17 00:35:27.846274 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-17 00:35:27.846283 | orchestrator | Tuesday 17 March 2026 00:34:28 +0000 (0:00:07.513) 0:05:38.736 *********
2026-03-17 00:35:27.846292 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:27.846301 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:27.846309 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:27.846318 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:27.846328 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:27.846337 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:27.846346 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:27.846355 | orchestrator |
2026-03-17 00:35:27.846365 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-17 00:35:27.846374 | orchestrator | Tuesday 17 March 2026 00:34:29 +0000 (0:00:01.112) 0:05:39.849 *********
2026-03-17 00:35:27.846380 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:27.846385 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:27.846391 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:27.846397 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:27.846402 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:27.846408 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:27.846413 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:27.846419 | orchestrator |
2026-03-17 00:35:27.846425 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-17 00:35:27.846431 | orchestrator | Tuesday 17 March 2026 00:34:38 +0000 (0:00:08.884) 0:05:48.734 *********
2026-03-17 00:35:27.846437 | orchestrator | changed: [testbed-manager]
2026-03-17 00:35:27.846442 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:27.846448 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:27.846453 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:27.846459 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:27.846465 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:27.846470 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:27.846476 | orchestrator |
2026-03-17 00:35:27.846482 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-17 00:35:27.846487 | orchestrator | Tuesday 17 March 2026 00:34:42 +0000 (0:00:03.350) 0:05:52.085 *********
2026-03-17 00:35:27.846493 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:27.846499 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:27.846505 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:27.846512 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:27.846518 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:27.846525 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:27.846531 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:27.846537 | orchestrator |
2026-03-17 00:35:27.846544 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-17 00:35:27.846550 | orchestrator | Tuesday 17 March 2026 00:34:43 +0000 (0:00:01.408) 0:05:53.493 *********
2026-03-17 00:35:27.846557 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:27.846564 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:27.846570 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:27.846576 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:27.846583 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:27.846589 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:27.846596 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:27.846602 | orchestrator |
2026-03-17 00:35:27.846608 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-17 00:35:27.846615 | orchestrator | Tuesday 17 March 2026 00:34:45 +0000 (0:00:00.655) 0:05:55.197 *********
2026-03-17 00:35:27.846621 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:35:27.846628 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:35:27.846634 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:35:27.846640 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:35:27.846647 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:35:27.846659 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:35:27.846666 | orchestrator | changed: [testbed-manager]
2026-03-17 00:35:27.846672 | orchestrator |
2026-03-17 00:35:27.846678 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-17 00:35:27.846685 | orchestrator | Tuesday 17 March 2026 00:34:45 +0000 (0:00:00.655) 0:05:55.852 *********
2026-03-17 00:35:27.846692 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:27.846698 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:27.846704 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:27.846728 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:27.846735 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:27.846742 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:27.846748 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:27.846755 | orchestrator |
2026-03-17 00:35:27.846761 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-17 00:35:27.846783 | orchestrator | Tuesday 17 March 2026 00:34:57 +0000 (0:00:11.140) 0:06:06.993 *********
2026-03-17 00:35:27.846789 | orchestrator | changed: [testbed-manager]
2026-03-17 00:35:27.846796 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:27.846803 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:27.846808 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:27.846814 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:27.846819 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:27.846825 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:27.846831 | orchestrator |
2026-03-17 00:35:27.846836 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-17 00:35:27.846842 | orchestrator | Tuesday 17 March 2026 00:34:58 +0000 (0:00:00.950) 0:06:07.944 *********
2026-03-17 00:35:27.846848 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:27.846854 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:27.846859 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:27.846865 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:27.846871 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:27.846876 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:27.846882 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:27.846887 | orchestrator |
2026-03-17 00:35:27.846893 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-17 00:35:27.846899 | orchestrator | Tuesday 17 March 2026 00:35:07 +0000 (0:00:09.541) 0:06:17.486 *********
2026-03-17 00:35:27.846905 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:27.846910 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:27.846916 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:27.846921 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:27.846927 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:27.846933 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:27.846938 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:27.846944 | orchestrator |
2026-03-17 00:35:27.846949 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-17 00:35:27.846955 | orchestrator | Tuesday 17 March 2026 00:35:20 +0000 (0:00:12.565) 0:06:30.051 *********
2026-03-17 00:35:27.846961 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-17 00:35:27.846967 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-17 00:35:27.846972 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-17 00:35:27.846978 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-17 00:35:27.846984 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-17 00:35:27.846989 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-17 00:35:27.846995 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-17 00:35:27.847001 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-17 00:35:27.847006 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-17 00:35:27.847012 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-17 00:35:27.847022 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-17 00:35:27.847073 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-17 00:35:27.847080 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-17 00:35:27.847086 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-17 00:35:27.847092 | orchestrator |
2026-03-17 00:35:27.847097 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-17 00:35:27.847103 | orchestrator | Tuesday 17 March 2026 00:35:21 +0000 (0:00:01.333) 0:06:31.384 *********
2026-03-17 00:35:27.847109 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:35:27.847117 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:35:27.847123 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:35:27.847129 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:35:27.847134 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:35:27.847140 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:35:27.847145 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:35:27.847151 | orchestrator |
2026-03-17 00:35:27.847157 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-17 00:35:27.847163 | orchestrator | Tuesday 17 March 2026 00:35:21 +0000 (0:00:00.505) 0:06:31.890 *********
2026-03-17 00:35:27.847168 | orchestrator | ok: [testbed-manager]
2026-03-17 00:35:27.847174 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:35:27.847179 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:35:27.847185 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:35:27.847191 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:35:27.847196 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:35:27.847202 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:35:27.847208 | orchestrator |
2026-03-17 00:35:27.847213 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-17 00:35:27.847220 | orchestrator | Tuesday 17 March 2026 00:35:26 +0000 (0:00:04.905) 0:06:36.795 *********
2026-03-17 00:35:27.847226 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:35:27.847231 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:35:27.847237 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:35:27.847242 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:35:27.847248 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:35:27.847254 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:35:27.847259 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:35:27.847265 | orchestrator |
2026-03-17 00:35:27.847271 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-17 00:35:27.847277 | orchestrator | Tuesday 17 March 2026 00:35:27 +0000 (0:00:00.486) 0:06:37.281 *********
2026-03-17 00:35:27.847283 | orchestrator | skipping: [testbed-manager] =>
(item=python3-docker)  2026-03-17 00:35:27.847289 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-17 00:35:27.847295 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:35:27.847301 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-17 00:35:27.847306 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-17 00:35:27.847312 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:35:27.847319 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-17 00:35:27.847328 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-17 00:35:27.847338 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:35:27.847353 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-17 00:35:47.401799 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-17 00:35:47.401915 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:35:47.401930 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-17 00:35:47.401942 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-17 00:35:47.401953 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:35:47.401964 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-17 00:35:47.402001 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-17 00:35:47.402073 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:35:47.402086 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-17 00:35:47.402097 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-17 00:35:47.402108 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:35:47.402119 | orchestrator | 2026-03-17 00:35:47.402132 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-17 00:35:47.402144 | 
orchestrator | Tuesday 17 March 2026 00:35:28 +0000 (0:00:00.704) 0:06:37.986 ********* 2026-03-17 00:35:47.402155 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:35:47.402166 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:35:47.402176 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:35:47.402187 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:35:47.402198 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:35:47.402208 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:35:47.402219 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:35:47.402230 | orchestrator | 2026-03-17 00:35:47.402241 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-17 00:35:47.402252 | orchestrator | Tuesday 17 March 2026 00:35:28 +0000 (0:00:00.501) 0:06:38.488 ********* 2026-03-17 00:35:47.402263 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:35:47.402273 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:35:47.402286 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:35:47.402297 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:35:47.402310 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:35:47.402322 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:35:47.402334 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:35:47.402346 | orchestrator | 2026-03-17 00:35:47.402359 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-03-17 00:35:47.402372 | orchestrator | Tuesday 17 March 2026 00:35:29 +0000 (0:00:00.477) 0:06:38.965 ********* 2026-03-17 00:35:47.402384 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:35:47.402396 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:35:47.402409 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:35:47.402419 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:35:47.402430 | orchestrator | 
skipping: [testbed-node-0] 2026-03-17 00:35:47.402440 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:35:47.402451 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:35:47.402462 | orchestrator | 2026-03-17 00:35:47.402473 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-17 00:35:47.402484 | orchestrator | Tuesday 17 March 2026 00:35:29 +0000 (0:00:00.507) 0:06:39.473 ********* 2026-03-17 00:35:47.402495 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:47.402506 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:35:47.402516 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:35:47.402527 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:35:47.402538 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:35:47.402548 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:35:47.402559 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:35:47.402570 | orchestrator | 2026-03-17 00:35:47.402580 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-17 00:35:47.402591 | orchestrator | Tuesday 17 March 2026 00:35:31 +0000 (0:00:01.928) 0:06:41.402 ********* 2026-03-17 00:35:47.402603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:35:47.402616 | orchestrator | 2026-03-17 00:35:47.402627 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-17 00:35:47.402638 | orchestrator | Tuesday 17 March 2026 00:35:32 +0000 (0:00:00.825) 0:06:42.228 ********* 2026-03-17 00:35:47.402663 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:47.402675 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:47.402745 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:47.402758 | orchestrator | 
changed: [testbed-node-5] 2026-03-17 00:35:47.402769 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:47.402780 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:47.402790 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:47.402801 | orchestrator | 2026-03-17 00:35:47.402812 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-17 00:35:47.402823 | orchestrator | Tuesday 17 March 2026 00:35:33 +0000 (0:00:00.937) 0:06:43.165 ********* 2026-03-17 00:35:47.402834 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:47.402844 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:47.402855 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:47.402866 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:47.402877 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:47.402887 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:47.402898 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:47.402908 | orchestrator | 2026-03-17 00:35:47.402919 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-17 00:35:47.402930 | orchestrator | Tuesday 17 March 2026 00:35:34 +0000 (0:00:00.963) 0:06:44.128 ********* 2026-03-17 00:35:47.402941 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:47.402951 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:47.402962 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:47.402973 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:47.402983 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:47.402994 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:47.403004 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:47.403015 | orchestrator | 2026-03-17 00:35:47.403026 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-03-17 00:35:47.403055 | 
orchestrator | Tuesday 17 March 2026 00:35:35 +0000 (0:00:01.644) 0:06:45.773 ********* 2026-03-17 00:35:47.403066 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:35:47.403077 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:35:47.403088 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:35:47.403099 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:35:47.403109 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:35:47.403120 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:35:47.403131 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:35:47.403142 | orchestrator | 2026-03-17 00:35:47.403153 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-17 00:35:47.403164 | orchestrator | Tuesday 17 March 2026 00:35:37 +0000 (0:00:01.624) 0:06:47.397 ********* 2026-03-17 00:35:47.403174 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:47.403185 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:47.403196 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:47.403207 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:47.403217 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:47.403228 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:47.403239 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:47.403249 | orchestrator | 2026-03-17 00:35:47.403261 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-17 00:35:47.403271 | orchestrator | Tuesday 17 March 2026 00:35:38 +0000 (0:00:01.355) 0:06:48.753 ********* 2026-03-17 00:35:47.403282 | orchestrator | changed: [testbed-manager] 2026-03-17 00:35:47.403293 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:35:47.403303 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:35:47.403314 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:35:47.403325 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:35:47.403335 | 
orchestrator | changed: [testbed-node-1] 2026-03-17 00:35:47.403346 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:35:47.403356 | orchestrator | 2026-03-17 00:35:47.403367 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-17 00:35:47.403391 | orchestrator | Tuesday 17 March 2026 00:35:40 +0000 (0:00:01.388) 0:06:50.142 ********* 2026-03-17 00:35:47.403409 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:35:47.403428 | orchestrator | 2026-03-17 00:35:47.403445 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-17 00:35:47.403464 | orchestrator | Tuesday 17 March 2026 00:35:41 +0000 (0:00:01.019) 0:06:51.161 ********* 2026-03-17 00:35:47.403481 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:47.403498 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:35:47.403509 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:35:47.403519 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:35:47.403530 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:35:47.403540 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:35:47.403551 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:35:47.403561 | orchestrator | 2026-03-17 00:35:47.403572 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-17 00:35:47.403582 | orchestrator | Tuesday 17 March 2026 00:35:42 +0000 (0:00:01.516) 0:06:52.678 ********* 2026-03-17 00:35:47.403593 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:47.403603 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:35:47.403614 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:35:47.403624 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:35:47.403635 | orchestrator | 
ok: [testbed-node-5] 2026-03-17 00:35:47.403645 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:35:47.403671 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:35:47.403746 | orchestrator | 2026-03-17 00:35:47.403771 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-17 00:35:47.403793 | orchestrator | Tuesday 17 March 2026 00:35:43 +0000 (0:00:01.134) 0:06:53.812 ********* 2026-03-17 00:35:47.403813 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:47.403826 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:35:47.403837 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:35:47.403847 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:35:47.403858 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:35:47.403868 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:35:47.403879 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:35:47.403889 | orchestrator | 2026-03-17 00:35:47.403900 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-17 00:35:47.403911 | orchestrator | Tuesday 17 March 2026 00:35:45 +0000 (0:00:01.132) 0:06:54.945 ********* 2026-03-17 00:35:47.403921 | orchestrator | ok: [testbed-manager] 2026-03-17 00:35:47.403932 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:35:47.403942 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:35:47.403953 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:35:47.403963 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:35:47.403974 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:35:47.403984 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:35:47.403994 | orchestrator | 2026-03-17 00:35:47.404005 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-17 00:35:47.404016 | orchestrator | Tuesday 17 March 2026 00:35:46 +0000 (0:00:01.327) 0:06:56.272 ********* 2026-03-17 00:35:47.404026 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:35:47.404037 | orchestrator | 2026-03-17 00:35:47.404048 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-17 00:35:47.404059 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.756) 0:06:57.029 ********* 2026-03-17 00:35:47.404069 | orchestrator | 2026-03-17 00:35:47.404080 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-17 00:35:47.404091 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.035) 0:06:57.065 ********* 2026-03-17 00:35:47.404111 | orchestrator | 2026-03-17 00:35:47.404122 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-17 00:35:47.404133 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.035) 0:06:57.100 ********* 2026-03-17 00:35:47.404143 | orchestrator | 2026-03-17 00:35:47.404154 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-17 00:35:47.404174 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.038) 0:06:57.138 ********* 2026-03-17 00:36:14.439127 | orchestrator | 2026-03-17 00:36:14.439231 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-17 00:36:14.439245 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.034) 0:06:57.173 ********* 2026-03-17 00:36:14.439254 | orchestrator | 2026-03-17 00:36:14.439262 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-17 00:36:14.439270 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.035) 0:06:57.209 ********* 2026-03-17 00:36:14.439278 | orchestrator | 2026-03-17 
00:36:14.439286 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-17 00:36:14.439295 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.039) 0:06:57.248 ********* 2026-03-17 00:36:14.439302 | orchestrator | 2026-03-17 00:36:14.439310 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-17 00:36:14.439318 | orchestrator | Tuesday 17 March 2026 00:35:47 +0000 (0:00:00.035) 0:06:57.283 ********* 2026-03-17 00:36:14.439326 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:14.439335 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:14.439343 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:14.439351 | orchestrator | 2026-03-17 00:36:14.439359 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-17 00:36:14.439367 | orchestrator | Tuesday 17 March 2026 00:35:48 +0000 (0:00:01.231) 0:06:58.515 ********* 2026-03-17 00:36:14.439375 | orchestrator | changed: [testbed-manager] 2026-03-17 00:36:14.439384 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:36:14.439391 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:36:14.439399 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:36:14.439407 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:36:14.439415 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:36:14.439423 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:36:14.439430 | orchestrator | 2026-03-17 00:36:14.439438 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-17 00:36:14.439446 | orchestrator | Tuesday 17 March 2026 00:35:49 +0000 (0:00:01.358) 0:06:59.874 ********* 2026-03-17 00:36:14.439454 | orchestrator | changed: [testbed-manager] 2026-03-17 00:36:14.439462 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:36:14.439470 | orchestrator | changed: [testbed-node-4] 2026-03-17 
00:36:14.439492 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:36:14.439500 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:36:14.439508 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:36:14.439516 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:36:14.439524 | orchestrator | 2026-03-17 00:36:14.439532 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-17 00:36:14.439540 | orchestrator | Tuesday 17 March 2026 00:35:51 +0000 (0:00:01.334) 0:07:01.208 ********* 2026-03-17 00:36:14.439548 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:36:14.439556 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:36:14.439563 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:36:14.439571 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:36:14.439579 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:36:14.439587 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:36:14.439595 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:36:14.439603 | orchestrator | 2026-03-17 00:36:14.439612 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-17 00:36:14.439627 | orchestrator | Tuesday 17 March 2026 00:35:53 +0000 (0:00:02.180) 0:07:03.389 ********* 2026-03-17 00:36:14.439720 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:36:14.439738 | orchestrator | 2026-03-17 00:36:14.439769 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-17 00:36:14.439787 | orchestrator | Tuesday 17 March 2026 00:35:53 +0000 (0:00:00.110) 0:07:03.500 ********* 2026-03-17 00:36:14.439802 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:14.439818 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:36:14.439831 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:36:14.439841 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:36:14.439850 | 
orchestrator | changed: [testbed-node-5] 2026-03-17 00:36:14.439859 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:36:14.439870 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:36:14.439883 | orchestrator | 2026-03-17 00:36:14.439897 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-17 00:36:14.439912 | orchestrator | Tuesday 17 March 2026 00:35:54 +0000 (0:00:01.018) 0:07:04.518 ********* 2026-03-17 00:36:14.439926 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:36:14.439940 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:36:14.439954 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:36:14.439968 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:36:14.439981 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:36:14.439993 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:36:14.440005 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:36:14.440018 | orchestrator | 2026-03-17 00:36:14.440032 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-17 00:36:14.440046 | orchestrator | Tuesday 17 March 2026 00:35:55 +0000 (0:00:00.500) 0:07:05.019 ********* 2026-03-17 00:36:14.440061 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:36:14.440076 | orchestrator | 2026-03-17 00:36:14.440090 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-17 00:36:14.440098 | orchestrator | Tuesday 17 March 2026 00:35:56 +0000 (0:00:01.028) 0:07:06.048 ********* 2026-03-17 00:36:14.440106 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:14.440114 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:14.440122 | orchestrator | ok: 
[testbed-node-4] 2026-03-17 00:36:14.440129 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:14.440137 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:14.440144 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:14.440152 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:14.440160 | orchestrator | 2026-03-17 00:36:14.440168 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-17 00:36:14.440179 | orchestrator | Tuesday 17 March 2026 00:35:57 +0000 (0:00:00.898) 0:07:06.946 ********* 2026-03-17 00:36:14.440191 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-17 00:36:14.440227 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-17 00:36:14.440241 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-17 00:36:14.440255 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-17 00:36:14.440263 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-17 00:36:14.440271 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-17 00:36:14.440278 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-17 00:36:14.440286 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-17 00:36:14.440294 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-17 00:36:14.440302 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-17 00:36:14.440310 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-17 00:36:14.440317 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-17 00:36:14.440325 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-17 00:36:14.440344 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-17 00:36:14.440352 | orchestrator | 2026-03-17 00:36:14.440360 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-03-17 00:36:14.440368 | orchestrator | Tuesday 17 March 2026 00:35:59 +0000 (0:00:02.638) 0:07:09.585 ********* 2026-03-17 00:36:14.440376 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:36:14.440383 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:36:14.440391 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:36:14.440399 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:36:14.440406 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:36:14.440414 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:36:14.440422 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:36:14.440429 | orchestrator | 2026-03-17 00:36:14.440437 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-17 00:36:14.440445 | orchestrator | Tuesday 17 March 2026 00:36:00 +0000 (0:00:00.675) 0:07:10.260 ********* 2026-03-17 00:36:14.440455 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:36:14.440465 | orchestrator | 2026-03-17 00:36:14.440473 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-17 00:36:14.440480 | orchestrator | Tuesday 17 March 2026 00:36:01 +0000 (0:00:00.805) 0:07:11.066 ********* 2026-03-17 00:36:14.440488 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:14.440496 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:14.440504 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:14.440511 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:14.440519 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:14.440527 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:14.440534 | orchestrator | ok: 
[testbed-node-2] 2026-03-17 00:36:14.440542 | orchestrator | 2026-03-17 00:36:14.440550 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-17 00:36:14.440558 | orchestrator | Tuesday 17 March 2026 00:36:01 +0000 (0:00:00.820) 0:07:11.887 ********* 2026-03-17 00:36:14.440565 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:14.440580 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:14.440588 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:14.440595 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:14.440603 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:14.440610 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:14.440618 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:14.440625 | orchestrator | 2026-03-17 00:36:14.440633 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-17 00:36:14.440641 | orchestrator | Tuesday 17 March 2026 00:36:03 +0000 (0:00:01.019) 0:07:12.906 ********* 2026-03-17 00:36:14.440676 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:36:14.440686 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:36:14.440693 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:36:14.440701 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:36:14.440709 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:36:14.440716 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:36:14.440724 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:36:14.440732 | orchestrator | 2026-03-17 00:36:14.440739 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-17 00:36:14.440747 | orchestrator | Tuesday 17 March 2026 00:36:03 +0000 (0:00:00.494) 0:07:13.401 ********* 2026-03-17 00:36:14.440755 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:14.440763 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:14.440770 | 
orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:14.440780 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:14.440793 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:14.440806 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:14.440828 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:14.440841 | orchestrator |
2026-03-17 00:36:14.440853 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-17 00:36:14.440866 | orchestrator | Tuesday 17 March 2026 00:36:05 +0000 (0:00:01.710) 0:07:15.111 *********
2026-03-17 00:36:14.440879 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:36:14.440893 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:36:14.440907 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:36:14.440921 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:36:14.440933 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:36:14.440946 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:36:14.440959 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:36:14.440973 | orchestrator |
2026-03-17 00:36:14.440986 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-17 00:36:14.441001 | orchestrator | Tuesday 17 March 2026 00:36:05 +0000 (0:00:00.493) 0:07:15.605 *********
2026-03-17 00:36:14.441015 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:14.441030 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:36:14.441042 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:36:14.441056 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:36:14.441064 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:36:14.441071 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:36:14.441088 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:36:47.951669 | orchestrator |
2026-03-17 00:36:47.951842 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-17 00:36:47.951862 | orchestrator | Tuesday 17 March 2026 00:36:14 +0000 (0:00:08.720) 0:07:24.325 *********
2026-03-17 00:36:47.951874 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:47.951887 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:36:47.951899 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:36:47.951909 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:36:47.951920 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:36:47.951930 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:36:47.951943 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:36:47.951962 | orchestrator |
2026-03-17 00:36:47.951973 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-17 00:36:47.951984 | orchestrator | Tuesday 17 March 2026 00:36:16 +0000 (0:00:01.761) 0:07:26.087 *********
2026-03-17 00:36:47.951995 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:47.952006 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:36:47.952017 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:36:47.952027 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:36:47.952038 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:36:47.952048 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:36:47.952059 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:36:47.952069 | orchestrator |
2026-03-17 00:36:47.952080 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-17 00:36:47.952091 | orchestrator | Tuesday 17 March 2026 00:36:17 +0000 (0:00:01.800) 0:07:27.887 *********
2026-03-17 00:36:47.952102 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:47.952112 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:36:47.952123 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:36:47.952133 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:36:47.952144 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:36:47.952155 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:36:47.952168 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:36:47.952180 | orchestrator |
2026-03-17 00:36:47.952193 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-17 00:36:47.952205 | orchestrator | Tuesday 17 March 2026 00:36:19 +0000 (0:00:01.701) 0:07:29.588 *********
2026-03-17 00:36:47.952217 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:47.952229 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:47.952241 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:47.952253 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:47.952288 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:47.952301 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:47.952313 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:47.952325 | orchestrator |
2026-03-17 00:36:47.952337 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-17 00:36:47.952350 | orchestrator | Tuesday 17 March 2026 00:36:20 +0000 (0:00:00.903) 0:07:30.492 *********
2026-03-17 00:36:47.952362 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:36:47.952374 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:36:47.952387 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:36:47.952399 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:36:47.952412 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:36:47.952424 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:36:47.952436 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:36:47.952448 | orchestrator |
2026-03-17 00:36:47.952460 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-17 00:36:47.952473 | orchestrator | Tuesday 17 March 2026 00:36:21 +0000 (0:00:00.980) 0:07:31.473 *********
2026-03-17 00:36:47.952485 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:36:47.952498 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:36:47.952510 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:36:47.952521 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:36:47.952531 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:36:47.952542 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:36:47.952552 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:36:47.952562 | orchestrator |
2026-03-17 00:36:47.952573 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-17 00:36:47.952586 | orchestrator | Tuesday 17 March 2026 00:36:22 +0000 (0:00:00.510) 0:07:31.983 *********
2026-03-17 00:36:47.952603 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:47.952661 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:47.952672 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:47.952683 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:47.952694 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:47.952704 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:47.952714 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:47.952725 | orchestrator |
2026-03-17 00:36:47.952736 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-17 00:36:47.952746 | orchestrator | Tuesday 17 March 2026 00:36:22 +0000 (0:00:00.529) 0:07:32.513 *********
2026-03-17 00:36:47.952757 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:47.952767 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:47.952778 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:47.952788 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:47.952799 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:47.952812 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:47.952829 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:47.952840 | orchestrator |
2026-03-17 00:36:47.952851 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-17 00:36:47.952862 | orchestrator | Tuesday 17 March 2026 00:36:23 +0000 (0:00:00.555) 0:07:33.069 *********
2026-03-17 00:36:47.952872 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:47.952883 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:47.952893 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:47.952903 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:47.952914 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:47.952924 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:47.952934 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:47.952945 | orchestrator |
2026-03-17 00:36:47.952955 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-17 00:36:47.952966 | orchestrator | Tuesday 17 March 2026 00:36:23 +0000 (0:00:00.676) 0:07:33.745 *********
2026-03-17 00:36:47.952977 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:47.952987 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:47.952997 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:47.953016 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:47.953027 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:47.953038 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:47.953048 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:47.953058 | orchestrator |
2026-03-17 00:36:47.953090 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-17 00:36:47.953101 | orchestrator | Tuesday 17 March 2026 00:36:29 +0000 (0:00:05.760) 0:07:39.506 *********
2026-03-17 00:36:47.953112 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:36:47.953122 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:36:47.953133 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:36:47.953143 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:36:47.953154 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:36:47.953164 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:36:47.953175 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:36:47.953185 | orchestrator |
2026-03-17 00:36:47.953196 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-17 00:36:47.953206 | orchestrator | Tuesday 17 March 2026 00:36:30 +0000 (0:00:00.535) 0:07:40.041 *********
2026-03-17 00:36:47.953219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:36:47.953232 | orchestrator |
2026-03-17 00:36:47.953243 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-17 00:36:47.953253 | orchestrator | Tuesday 17 March 2026 00:36:31 +0000 (0:00:01.069) 0:07:41.110 *********
2026-03-17 00:36:47.953264 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:47.953274 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:47.953285 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:47.953295 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:47.953306 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:47.953316 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:47.953326 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:47.953337 | orchestrator |
2026-03-17 00:36:47.953347 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-17 00:36:47.953358 | orchestrator | Tuesday 17 March 2026 00:36:33 +0000 (0:00:02.249) 0:07:43.360 *********
2026-03-17 00:36:47.953368 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:47.953378 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:47.953389 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:47.953399 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:47.953410 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:47.953420 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:47.953431 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:47.953441 | orchestrator |
2026-03-17 00:36:47.953452 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-17 00:36:47.953462 | orchestrator | Tuesday 17 March 2026 00:36:34 +0000 (0:00:01.264) 0:07:44.625 *********
2026-03-17 00:36:47.953473 | orchestrator | ok: [testbed-manager]
2026-03-17 00:36:47.953483 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:36:47.953494 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:36:47.953504 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:36:47.953514 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:36:47.953525 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:36:47.953535 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:36:47.953545 | orchestrator |
2026-03-17 00:36:47.953556 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-17 00:36:47.953566 | orchestrator | Tuesday 17 March 2026 00:36:35 +0000 (0:00:00.922) 0:07:45.548 *********
2026-03-17 00:36:47.953583 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-17 00:36:47.953596 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-17 00:36:47.953652 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-17 00:36:47.953666 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-17 00:36:47.953676 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-17 00:36:47.953687 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-17 00:36:47.953698 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-17 00:36:47.953708 | orchestrator |
2026-03-17 00:36:47.953719 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-17 00:36:47.953730 | orchestrator | Tuesday 17 March 2026 00:36:37 +0000 (0:00:01.929) 0:07:47.477 *********
2026-03-17 00:36:47.953740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:36:47.953751 | orchestrator |
2026-03-17 00:36:47.953762 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-17 00:36:47.953773 | orchestrator | Tuesday 17 March 2026 00:36:38 +0000 (0:00:00.771) 0:07:48.249 *********
2026-03-17 00:36:47.953783 | orchestrator | changed: [testbed-manager]
2026-03-17 00:36:47.953794 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:36:47.953805 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:36:47.953815 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:36:47.953826 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:36:47.953836 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:36:47.953847 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:36:47.953857 | orchestrator |
2026-03-17 00:36:47.953875 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-17 00:37:19.953415 | orchestrator | Tuesday 17 March 2026 00:36:47 +0000 (0:00:09.590) 0:07:57.840 *********
2026-03-17 00:37:19.953498 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:37:19.953508 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:37:19.953515 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:37:19.953522 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:37:19.953529 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:37:19.953536 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:37:19.953543 | orchestrator | ok: [testbed-manager]
2026-03-17 00:37:19.953549 | orchestrator |
2026-03-17 00:37:19.953626 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-17 00:37:19.953635 | orchestrator | Tuesday 17 March 2026 00:36:50 +0000 (0:00:02.312) 0:08:00.152 *********
2026-03-17 00:37:19.953642 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:37:19.953649 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:37:19.953655 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:37:19.953662 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:37:19.953669 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:37:19.953676 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:37:19.953682 | orchestrator |
2026-03-17 00:37:19.953689 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-17 00:37:19.953696 | orchestrator | Tuesday 17 March 2026 00:36:51 +0000 (0:00:01.427) 0:08:01.579 *********
2026-03-17 00:37:19.953703 | orchestrator | changed: [testbed-manager]
2026-03-17 00:37:19.953710 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:37:19.953717 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:37:19.953724 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:37:19.953730 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:37:19.953755 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:37:19.953762 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:37:19.953769 | orchestrator |
2026-03-17 00:37:19.953775 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-17 00:37:19.953782 | orchestrator |
2026-03-17 00:37:19.953789 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-17 00:37:19.953795 | orchestrator | Tuesday 17 March 2026 00:36:53 +0000 (0:00:01.337) 0:08:02.916 *********
2026-03-17 00:37:19.953802 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:37:19.953808 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:37:19.953814 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:37:19.953821 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:37:19.953827 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:37:19.953834 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:37:19.953840 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:37:19.953846 | orchestrator |
2026-03-17 00:37:19.953853 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-17 00:37:19.953860 | orchestrator |
2026-03-17 00:37:19.953866 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-17 00:37:19.953873 | orchestrator | Tuesday 17 March 2026 00:36:53 +0000 (0:00:00.652) 0:08:03.568 *********
2026-03-17 00:37:19.953879 | orchestrator | changed: [testbed-manager]
2026-03-17 00:37:19.953886 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:37:19.953893 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:37:19.953899 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:37:19.953906 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:37:19.953912 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:37:19.953918 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:37:19.953925 | orchestrator |
2026-03-17 00:37:19.953931 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-17 00:37:19.953950 | orchestrator | Tuesday 17 March 2026 00:36:55 +0000 (0:00:01.387) 0:08:04.956 *********
2026-03-17 00:37:19.953959 | orchestrator | ok: [testbed-manager]
2026-03-17 00:37:19.953966 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:37:19.953974 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:37:19.953981 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:37:19.953988 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:37:19.953996 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:37:19.954003 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:37:19.954011 | orchestrator |
2026-03-17 00:37:19.954059 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-17 00:37:19.954067 | orchestrator | Tuesday 17 March 2026 00:36:56 +0000 (0:00:01.391) 0:08:06.348 *********
2026-03-17 00:37:19.954075 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:37:19.954082 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:37:19.954089 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:37:19.954096 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:37:19.954102 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:37:19.954109 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:37:19.954115 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:37:19.954122 | orchestrator |
2026-03-17 00:37:19.954128 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-17 00:37:19.954135 | orchestrator | Tuesday 17 March 2026 00:36:56 +0000 (0:00:00.492) 0:08:06.841 *********
2026-03-17 00:37:19.954142 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:37:19.954151 | orchestrator |
2026-03-17 00:37:19.954157 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-17 00:37:19.954164 | orchestrator | Tuesday 17 March 2026 00:36:57 +0000 (0:00:00.979) 0:08:07.820 *********
2026-03-17 00:37:19.954172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:37:19.954188 | orchestrator |
2026-03-17 00:37:19.954195 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-17 00:37:19.954202 | orchestrator | Tuesday 17 March 2026 00:36:58 +0000 (0:00:00.781) 0:08:08.602 *********
2026-03-17 00:37:19.954208 | orchestrator | changed: [testbed-manager]
2026-03-17 00:37:19.954215 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:37:19.954221 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:37:19.954228 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:37:19.954235 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:37:19.954241 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:37:19.954248 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:37:19.954254 | orchestrator |
2026-03-17 00:37:19.954274 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-17 00:37:19.954281 | orchestrator | Tuesday 17 March 2026 00:37:08 +0000 (0:00:09.670) 0:08:18.273 *********
2026-03-17 00:37:19.954288 | orchestrator | changed: [testbed-manager]
2026-03-17 00:37:19.954295 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:37:19.954301 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:37:19.954308 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:37:19.954314 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:37:19.954321 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:37:19.954327 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:37:19.954334 | orchestrator |
2026-03-17 00:37:19.954341 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-17 00:37:19.954347 | orchestrator | Tuesday 17 March 2026 00:37:09 +0000 (0:00:01.109) 0:08:19.382 *********
2026-03-17 00:37:19.954354 | orchestrator | changed: [testbed-manager]
2026-03-17 00:37:19.954360 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:37:19.954367 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:37:19.954373 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:37:19.954380 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:37:19.954386 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:37:19.954393 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:37:19.954399 | orchestrator |
2026-03-17 00:37:19.954406 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-17 00:37:19.954412 | orchestrator | Tuesday 17 March 2026 00:37:10 +0000 (0:00:01.397) 0:08:20.780 *********
2026-03-17 00:37:19.954419 | orchestrator | changed: [testbed-manager]
2026-03-17 00:37:19.954426 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:37:19.954432 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:37:19.954439 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:37:19.954445 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:37:19.954452 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:37:19.954458 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:37:19.954465 | orchestrator |
2026-03-17 00:37:19.954471 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-17 00:37:19.954478 | orchestrator | Tuesday 17 March 2026 00:37:12 +0000 (0:00:01.865) 0:08:22.645 *********
2026-03-17 00:37:19.954484 | orchestrator | changed: [testbed-manager]
2026-03-17 00:37:19.954491 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:37:19.954497 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:37:19.954504 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:37:19.954510 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:37:19.954517 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:37:19.954524 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:37:19.954530 | orchestrator |
2026-03-17 00:37:19.954537 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-17 00:37:19.954544 | orchestrator | Tuesday 17 March 2026 00:37:13 +0000 (0:00:01.254) 0:08:23.900 *********
2026-03-17 00:37:19.954550 | orchestrator | changed: [testbed-manager]
2026-03-17 00:37:19.954580 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:37:19.954591 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:37:19.954611 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:37:19.954623 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:37:19.954634 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:37:19.954645 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:37:19.954653 | orchestrator |
2026-03-17 00:37:19.954660 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-17 00:37:19.954667 | orchestrator |
2026-03-17 00:37:19.954678 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-17 00:37:19.954685 | orchestrator | Tuesday 17 March 2026 00:37:15 +0000 (0:00:01.140) 0:08:25.041 *********
2026-03-17 00:37:19.954692 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:37:19.954699 | orchestrator |
2026-03-17 00:37:19.954705 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-17 00:37:19.954712 | orchestrator | Tuesday 17 March 2026 00:37:15 +0000 (0:00:00.790) 0:08:25.831 *********
2026-03-17 00:37:19.954718 | orchestrator | ok: [testbed-manager]
2026-03-17 00:37:19.954725 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:37:19.954731 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:37:19.954738 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:37:19.954744 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:37:19.954751 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:37:19.954757 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:37:19.954764 | orchestrator |
2026-03-17 00:37:19.954770 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-17 00:37:19.954777 | orchestrator | Tuesday 17 March 2026 00:37:16 +0000 (0:00:01.069) 0:08:26.900 *********
2026-03-17 00:37:19.954784 | orchestrator | changed: [testbed-manager]
2026-03-17 00:37:19.954790 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:37:19.954797 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:37:19.954803 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:37:19.954810 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:37:19.954816 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:37:19.954823 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:37:19.954829 | orchestrator |
2026-03-17 00:37:19.954836 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-17 00:37:19.954843 | orchestrator | Tuesday 17 March 2026 00:37:18 +0000 (0:00:01.148) 0:08:28.049 *********
2026-03-17 00:37:19.954849 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:37:19.954856 | orchestrator |
2026-03-17 00:37:19.954863 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-17 00:37:19.954869 | orchestrator | Tuesday 17 March 2026 00:37:19 +0000 (0:00:00.937) 0:08:28.986 *********
2026-03-17 00:37:19.954876 | orchestrator | ok: [testbed-manager]
2026-03-17 00:37:19.954882 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:37:19.954889 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:37:19.954895 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:37:19.954902 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:37:19.954908 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:37:19.954915 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:37:19.954921 | orchestrator |
2026-03-17 00:37:19.954933 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-17 00:37:21.721528 | orchestrator | Tuesday 17 March 2026 00:37:19 +0000 (0:00:00.855) 0:08:29.842 *********
2026-03-17 00:37:21.721687 | orchestrator | changed: [testbed-manager]
2026-03-17 00:37:21.721706 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:37:21.721718 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:37:21.721729 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:37:21.721740 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:37:21.721751 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:37:21.721761 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:37:21.721773 | orchestrator |
2026-03-17 00:37:21.721815 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:37:21.721828 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-17 00:37:21.721841 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-17 00:37:21.721852 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-17 00:37:21.721863 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-17 00:37:21.721874 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-17 00:37:21.721885 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-17 00:37:21.721896 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-17 00:37:21.721906 | orchestrator |
2026-03-17 00:37:21.721917 | orchestrator |
2026-03-17 00:37:21.721928 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:37:21.721940 | orchestrator | Tuesday 17 March 2026 00:37:21 +0000 (0:00:01.131) 0:08:30.973 *********
2026-03-17 00:37:21.721951 | orchestrator | ===============================================================================
2026-03-17 00:37:21.721961 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.33s
2026-03-17 00:37:21.721972 | orchestrator | osism.commons.packages : Download required packages -------------------- 47.06s
2026-03-17 00:37:21.721983 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.12s
2026-03-17 00:37:21.721994 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.79s
2026-03-17 00:37:21.722004 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.57s
2026-03-17 00:37:21.722115 | orchestrator | osism.services.docker : Install containerd package --------------------- 11.14s
2026-03-17 00:37:21.722135 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.73s
2026-03-17 00:37:21.722147 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.72s
2026-03-17 00:37:21.722160 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.69s
2026-03-17 00:37:21.722173 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.67s
2026-03-17 00:37:21.722185 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.59s
2026-03-17 00:37:21.722197 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.54s
2026-03-17 00:37:21.722209 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.39s
2026-03-17 00:37:21.722221 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.88s
2026-03-17 00:37:21.722233 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.77s
2026-03-17 00:37:21.722246 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.72s
2026-03-17 00:37:21.722258 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.51s
2026-03-17 00:37:21.722270 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.98s
2026-03-17 00:37:21.722282 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.76s
2026-03-17 00:37:21.722294 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.21s
2026-03-17 00:37:22.012464 | orchestrator | + osism apply fail2ban
2026-03-17 00:37:34.409690 | orchestrator | 2026-03-17 00:37:34 | INFO  | Task 0e5349b4-bf92-4771-8afd-f36e49b271a3 (fail2ban) was prepared for execution.
2026-03-17 00:37:34.409787 | orchestrator | 2026-03-17 00:37:34 | INFO  | It takes a moment until task 0e5349b4-bf92-4771-8afd-f36e49b271a3 (fail2ban) has been started and output is visible here.
2026-03-17 00:37:57.131607 |
2026-03-17 00:37:57.131720 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-17 00:37:57.131737 | orchestrator |
2026-03-17 00:37:57.131749 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-17 00:37:57.131760 | orchestrator | Tuesday 17 March 2026 00:37:38 +0000 (0:00:00.249) 0:00:00.249 *********
2026-03-17 00:37:57.131772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:37:57.131786 | orchestrator |
2026-03-17 00:37:57.131797 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-17 00:37:57.131808 | orchestrator | Tuesday 17 March 2026 00:37:39 +0000 (0:00:01.103) 0:00:01.353 *********
2026-03-17 00:37:57.131819 | orchestrator | changed: [testbed-manager]
2026-03-17 00:37:57.131831 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:37:57.131841 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:37:57.131852 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:37:57.131862 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:37:57.131873 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:37:57.131883 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:37:57.131894 | orchestrator |
2026-03-17 00:37:57.131906 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-17 00:37:57.131917 | orchestrator | Tuesday 17 March 2026 00:37:52 +0000 (0:00:12.546) 0:00:13.899 *********
2026-03-17 00:37:57.131928 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:57.131938 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:57.131949 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:57.131959 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:57.131970 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:57.131980 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:57.131991 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:37:57.132001 | orchestrator | 2026-03-17 00:37:57.132012 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-17 00:37:57.132023 | orchestrator | Tuesday 17 March 2026 00:37:53 +0000 (0:00:01.488) 0:00:15.388 ********* 2026-03-17 00:37:57.132034 | orchestrator | ok: [testbed-manager] 2026-03-17 00:37:57.132045 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:37:57.132056 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:37:57.132066 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:37:57.132077 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:37:57.132087 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:37:57.132100 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:37:57.132113 | orchestrator | 2026-03-17 00:37:57.132125 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-17 00:37:57.132138 | orchestrator | Tuesday 17 March 2026 00:37:55 +0000 (0:00:01.410) 0:00:16.799 ********* 2026-03-17 00:37:57.132150 | orchestrator | changed: [testbed-manager] 2026-03-17 00:37:57.132162 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:37:57.132174 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:37:57.132186 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:37:57.132198 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:37:57.132210 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:37:57.132222 | orchestrator | changed: 
[testbed-node-5] 2026-03-17 00:37:57.132234 | orchestrator | 2026-03-17 00:37:57.132246 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:37:57.132260 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:57.132301 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:57.132314 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:57.132327 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:57.132339 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:57.132351 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:57.132363 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:37:57.132381 | orchestrator | 2026-03-17 00:37:57.132398 | orchestrator | 2026-03-17 00:37:57.132417 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:37:57.132437 | orchestrator | Tuesday 17 March 2026 00:37:56 +0000 (0:00:01.471) 0:00:18.270 ********* 2026-03-17 00:37:57.132457 | orchestrator | =============================================================================== 2026-03-17 00:37:57.132476 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.55s 2026-03-17 00:37:57.132531 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.49s 2026-03-17 00:37:57.132550 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.47s 2026-03-17 00:37:57.132568 | orchestrator | osism.services.fail2ban : 
Manage fail2ban service ----------------------- 1.41s 2026-03-17 00:37:57.132586 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.10s 2026-03-17 00:37:57.330687 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-17 00:37:57.330791 | orchestrator | + osism apply network 2026-03-17 00:38:09.269940 | orchestrator | 2026-03-17 00:38:09 | INFO  | Task 224966ea-b75e-449f-ae47-961f8f7eb016 (network) was prepared for execution. 2026-03-17 00:38:09.270110 | orchestrator | 2026-03-17 00:38:09 | INFO  | It takes a moment until task 224966ea-b75e-449f-ae47-961f8f7eb016 (network) has been started and output is visible here. 2026-03-17 00:38:36.104434 | orchestrator | 2026-03-17 00:38:36.104581 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-17 00:38:36.104599 | orchestrator | 2026-03-17 00:38:36.104611 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-17 00:38:36.104623 | orchestrator | Tuesday 17 March 2026 00:38:13 +0000 (0:00:00.232) 0:00:00.232 ********* 2026-03-17 00:38:36.104634 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:36.104646 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:36.104658 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:36.104668 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:36.104679 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:36.104690 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:36.104700 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:36.104711 | orchestrator | 2026-03-17 00:38:36.104722 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-17 00:38:36.104733 | orchestrator | Tuesday 17 March 2026 00:38:13 +0000 (0:00:00.508) 0:00:00.741 ********* 2026-03-17 00:38:36.104745 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:38:36.104759 | orchestrator | 2026-03-17 00:38:36.104769 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-17 00:38:36.104780 | orchestrator | Tuesday 17 March 2026 00:38:14 +0000 (0:00:00.858) 0:00:01.599 ********* 2026-03-17 00:38:36.104876 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:36.104889 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:36.104900 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:36.104916 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:36.104935 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:36.104946 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:36.104956 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:36.104967 | orchestrator | 2026-03-17 00:38:36.104980 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-17 00:38:36.104992 | orchestrator | Tuesday 17 March 2026 00:38:16 +0000 (0:00:02.211) 0:00:03.810 ********* 2026-03-17 00:38:36.105004 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:36.105016 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:36.105028 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:36.105042 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:36.105054 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:36.105066 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:36.105078 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:36.105090 | orchestrator | 2026-03-17 00:38:36.105102 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-17 00:38:36.105114 | orchestrator | Tuesday 17 March 2026 00:38:18 +0000 (0:00:01.713) 0:00:05.524 ********* 
2026-03-17 00:38:36.105126 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-03-17 00:38:36.105139 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-03-17 00:38:36.105151 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-17 00:38:36.105164 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-17 00:38:36.105176 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-17 00:38:36.105188 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-17 00:38:36.105200 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-17 00:38:36.105211 | orchestrator | 2026-03-17 00:38:36.105239 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-03-17 00:38:36.105251 | orchestrator | Tuesday 17 March 2026 00:38:19 +0000 (0:00:00.970) 0:00:06.494 ********* 2026-03-17 00:38:36.105266 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 00:38:36.105278 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-17 00:38:36.105288 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 00:38:36.105299 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 00:38:36.105309 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-17 00:38:36.105320 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 00:38:36.105330 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-17 00:38:36.105340 | orchestrator | 2026-03-17 00:38:36.105351 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-17 00:38:36.105362 | orchestrator | Tuesday 17 March 2026 00:38:22 +0000 (0:00:03.131) 0:00:09.625 ********* 2026-03-17 00:38:36.105372 | orchestrator | changed: [testbed-manager] 2026-03-17 00:38:36.105383 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:38:36.105393 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:38:36.105403 | orchestrator | changed: 
[testbed-node-2] 2026-03-17 00:38:36.105414 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:38:36.105424 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:38:36.105471 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:38:36.105483 | orchestrator | 2026-03-17 00:38:36.105494 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-03-17 00:38:36.105504 | orchestrator | Tuesday 17 March 2026 00:38:23 +0000 (0:00:01.428) 0:00:11.054 ********* 2026-03-17 00:38:36.105515 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 00:38:36.105526 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 00:38:36.105536 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-17 00:38:36.105547 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 00:38:36.105557 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-17 00:38:36.105576 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 00:38:36.105587 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-17 00:38:36.105598 | orchestrator | 2026-03-17 00:38:36.105608 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-17 00:38:36.105619 | orchestrator | Tuesday 17 March 2026 00:38:25 +0000 (0:00:01.406) 0:00:12.460 ********* 2026-03-17 00:38:36.105630 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:36.105640 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:36.105651 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:36.105661 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:36.105672 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:36.105682 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:36.105693 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:36.105703 | orchestrator | 2026-03-17 00:38:36.105714 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-17 00:38:36.105744 | 
orchestrator | Tuesday 17 March 2026 00:38:26 +0000 (0:00:01.016) 0:00:13.477 ********* 2026-03-17 00:38:36.105756 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:38:36.105766 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:38:36.105777 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:38:36.105787 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:38:36.105798 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:38:36.105808 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:38:36.105819 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:38:36.105830 | orchestrator | 2026-03-17 00:38:36.105841 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-03-17 00:38:36.105851 | orchestrator | Tuesday 17 March 2026 00:38:26 +0000 (0:00:00.535) 0:00:14.013 ********* 2026-03-17 00:38:36.105862 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:36.105872 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:36.105883 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:36.105894 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:36.105904 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:36.105914 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:36.105925 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:36.105935 | orchestrator | 2026-03-17 00:38:36.105946 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-17 00:38:36.105957 | orchestrator | Tuesday 17 March 2026 00:38:29 +0000 (0:00:02.271) 0:00:16.284 ********* 2026-03-17 00:38:36.105967 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:38:36.105978 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:38:36.105988 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:38:36.105999 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:38:36.106009 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:38:36.106085 | 
orchestrator | skipping: [testbed-node-5] 2026-03-17 00:38:36.106098 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-17 00:38:36.106110 | orchestrator | 2026-03-17 00:38:36.106121 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-17 00:38:36.106132 | orchestrator | Tuesday 17 March 2026 00:38:30 +0000 (0:00:00.885) 0:00:17.169 ********* 2026-03-17 00:38:36.106143 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:36.106154 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:38:36.106164 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:38:36.106184 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:38:36.106195 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:38:36.106205 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:38:36.106216 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:38:36.106226 | orchestrator | 2026-03-17 00:38:36.106237 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-17 00:38:36.106247 | orchestrator | Tuesday 17 March 2026 00:38:31 +0000 (0:00:01.716) 0:00:18.885 ********* 2026-03-17 00:38:36.106259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:38:36.106279 | orchestrator | 2026-03-17 00:38:36.106290 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-17 00:38:36.106300 | orchestrator | Tuesday 17 March 2026 00:38:32 +0000 (0:00:01.215) 0:00:20.100 ********* 2026-03-17 00:38:36.106311 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:36.106322 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:36.106332 | orchestrator 
| ok: [testbed-node-1] 2026-03-17 00:38:36.106343 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:36.106353 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:36.106370 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:36.106381 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:36.106391 | orchestrator | 2026-03-17 00:38:36.106402 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-17 00:38:36.106413 | orchestrator | Tuesday 17 March 2026 00:38:34 +0000 (0:00:01.241) 0:00:21.342 ********* 2026-03-17 00:38:36.106423 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:36.106434 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:36.106462 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:36.106473 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:36.106483 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:36.106494 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:36.106504 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:36.106515 | orchestrator | 2026-03-17 00:38:36.106525 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-17 00:38:36.106536 | orchestrator | Tuesday 17 March 2026 00:38:34 +0000 (0:00:00.680) 0:00:22.023 ********* 2026-03-17 00:38:36.106547 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:38:36.106558 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:38:36.106569 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:38:36.106579 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:38:36.106590 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:38:36.106600 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:38:36.106611 | 
orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:38:36.106621 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:38:36.106632 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:38:36.106642 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:38:36.106653 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:38:36.106663 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:38:36.106674 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:38:36.106685 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:38:36.106695 | orchestrator | 2026-03-17 00:38:36.106715 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-17 00:38:51.036265 | orchestrator | Tuesday 17 March 2026 00:38:36 +0000 (0:00:01.181) 0:00:23.205 ********* 2026-03-17 00:38:51.036372 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:38:51.036386 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:38:51.036395 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:38:51.036402 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:38:51.036410 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:38:51.036459 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:38:51.036468 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:38:51.036475 | orchestrator | 2026-03-17 00:38:51.036483 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-17 00:38:51.036510 | orchestrator | Tuesday 17 March 2026 00:38:36 +0000 (0:00:00.583) 0:00:23.788 ********* 2026-03-17 00:38:51.036520 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-0, testbed-manager, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:38:51.036530 | orchestrator | 2026-03-17 00:38:51.036537 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-17 00:38:51.036544 | orchestrator | Tuesday 17 March 2026 00:38:40 +0000 (0:00:03.995) 0:00:27.784 ********* 2026-03-17 00:38:51.036554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036564 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 
23}}) 2026-03-17 00:38:51.036633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036647 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036659 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-17 00:38:51.036671 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-17 00:38:51.036702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-17 00:38:51.036734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-17 00:38:51.036758 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-17 00:38:51.036772 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-17 00:38:51.036781 | orchestrator | 2026-03-17 00:38:51.036789 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-17 00:38:51.036798 | orchestrator | Tuesday 17 March 2026 00:38:45 +0000 (0:00:05.289) 0:00:33.074 ********* 2026-03-17 00:38:51.036806 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036821 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036830 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-17 00:38:51.036870 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-17 00:38:51.036879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-17 00:38:51.036888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 
'mtu': 1350, 'vni': 23}}) 2026-03-17 00:38:51.036897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-17 00:38:51.036911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-17 00:38:51.036929 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-17 00:38:56.103642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-17 00:38:56.103751 | orchestrator | 2026-03-17 00:38:56.103768 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-17 00:38:56.103781 | orchestrator | Tuesday 17 March 2026 00:38:51 +0000 (0:00:05.058) 0:00:38.133 ********* 2026-03-17 00:38:56.103794 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:38:56.103806 | orchestrator | 2026-03-17 00:38:56.103817 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-03-17 00:38:56.103828 | orchestrator | Tuesday 17 March 2026 00:38:52 +0000 (0:00:01.087) 0:00:39.220 *********
2026-03-17 00:38:56.103838 | orchestrator | ok: [testbed-manager]
2026-03-17 00:38:56.103850 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:38:56.103861 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:38:56.103871 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:38:56.103881 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:38:56.103892 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:38:56.103902 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:38:56.103913 | orchestrator |
2026-03-17 00:38:56.103923 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-17 00:38:56.103934 | orchestrator | Tuesday 17 March 2026 00:38:53 +0000 (0:00:01.012) 0:00:40.233 *********
2026-03-17 00:38:56.103945 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:56.103957 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:56.103968 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:56.103978 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:56.103989 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:38:56.104001 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:56.104011 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:56.104022 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:56.104032 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:56.104043 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:38:56.104054 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:56.104064 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:56.104091 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:56.104103 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:56.104114 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:38:56.104144 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:56.104156 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:56.104166 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:56.104179 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:56.104191 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:38:56.104204 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:56.104216 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:56.104228 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:56.104241 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:56.104253 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:56.104265 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:56.104277 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:56.104289 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:56.104301 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:38:56.104313 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:38:56.104326 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-17 00:38:56.104339 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-17 00:38:56.104351 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-17 00:38:56.104363 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-17 00:38:56.104375 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:38:56.104388 | orchestrator |
2026-03-17 00:38:56.104400 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-17 00:38:56.104455 | orchestrator | Tuesday 17 March 2026 00:38:54 +0000 (0:00:01.662) 0:00:41.895 *********
2026-03-17 00:38:56.104469 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:38:56.104480 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:38:56.104490 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:38:56.104501 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:38:56.104512 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:38:56.104522 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:38:56.104533 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:38:56.104544 | orchestrator |
2026-03-17 00:38:56.104555 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-17 00:38:56.104566 | orchestrator | Tuesday 17 March 2026 00:38:55 +0000 (0:00:00.520) 0:00:42.416 *********
2026-03-17 00:38:56.104577 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:38:56.104587 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:38:56.104598 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:38:56.104608 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:38:56.104619 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:38:56.104630 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:38:56.104641 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:38:56.104651 | orchestrator |
2026-03-17 00:38:56.104662 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:38:56.104674 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-17 00:38:56.104686 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-17 00:38:56.104704 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-17 00:38:56.104715 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-17 00:38:56.104726 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-17 00:38:56.104736 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-17 00:38:56.104747 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-17 00:38:56.104758 | orchestrator |
2026-03-17 00:38:56.104769 | orchestrator |
2026-03-17 00:38:56.104779 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:38:56.104790 | orchestrator | Tuesday 17 March 2026 00:38:55 +0000 (0:00:00.565) 0:00:42.981 *********
2026-03-17 00:38:56.104801 | orchestrator | ===============================================================================
2026-03-17 00:38:56.104817 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.29s
2026-03-17 00:38:56.104828 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.06s
2026-03-17 00:38:56.104839 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.00s
2026-03-17 00:38:56.104849 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.13s
2026-03-17 00:38:56.104860 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.27s
2026-03-17 00:38:56.104870 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.21s
2026-03-17 00:38:56.104881 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.72s
2026-03-17 00:38:56.104892 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.71s
2026-03-17 00:38:56.104902 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.66s
2026-03-17 00:38:56.104913 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.43s
2026-03-17 00:38:56.104924 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.41s
2026-03-17 00:38:56.104934 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.24s
2026-03-17 00:38:56.104945 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.22s
2026-03-17 00:38:56.104956 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.18s
2026-03-17 00:38:56.104966 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.09s
2026-03-17 00:38:56.104977 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.02s
2026-03-17 00:38:56.104987 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.01s
2026-03-17 00:38:56.104998 | orchestrator | osism.commons.network : Create required directories --------------------- 0.97s
2026-03-17 00:38:56.105009 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.89s
2026-03-17 00:38:56.105019 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 0.86s
2026-03-17 00:38:56.298132 | orchestrator | + osism apply wireguard
2026-03-17 00:39:08.070532 | orchestrator | 2026-03-17 00:39:08 | INFO  | Task f9568b0b-72ff-4b36-aa8e-c81b616bbae4 (wireguard) was prepared for execution.
2026-03-17 00:39:08.070656 | orchestrator | 2026-03-17 00:39:08 | INFO  | It takes a moment until task f9568b0b-72ff-4b36-aa8e-c81b616bbae4 (wireguard) has been started and output is visible here.
2026-03-17 00:39:25.454361 | orchestrator |
2026-03-17 00:39:25.454566 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-17 00:39:25.454612 | orchestrator |
2026-03-17 00:39:25.454626 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-17 00:39:25.454638 | orchestrator | Tuesday 17 March 2026 00:39:12 +0000 (0:00:00.162) 0:00:00.162 *********
2026-03-17 00:39:25.454648 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:25.454660 | orchestrator |
2026-03-17 00:39:25.454671 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-17 00:39:25.454685 | orchestrator | Tuesday 17 March 2026 00:39:13 +0000 (0:00:01.190) 0:00:01.353 *********
2026-03-17 00:39:25.454704 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:25.454723 | orchestrator |
2026-03-17 00:39:25.454747 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-17 00:39:25.454767 | orchestrator | Tuesday 17 March 2026 00:39:18 +0000 (0:00:04.984) 0:00:06.337 *********
2026-03-17 00:39:25.454785 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:25.454803 | orchestrator |
2026-03-17 00:39:25.454815 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-17 00:39:25.454829 | orchestrator | Tuesday 17 March 2026 00:39:18 +0000 (0:00:00.491) 0:00:06.829 *********
2026-03-17 00:39:25.454841 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:25.454853 | orchestrator |
2026-03-17 00:39:25.454866 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-17 00:39:25.454879 | orchestrator | Tuesday 17 March 2026 00:39:19 +0000 (0:00:00.413) 0:00:07.242 *********
2026-03-17 00:39:25.454890 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:25.454902 | orchestrator |
2026-03-17 00:39:25.454914 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-17 00:39:25.454927 | orchestrator | Tuesday 17 March 2026 00:39:19 +0000 (0:00:00.662) 0:00:07.904 *********
2026-03-17 00:39:25.454939 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:25.454951 | orchestrator |
2026-03-17 00:39:25.454963 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-17 00:39:25.454975 | orchestrator | Tuesday 17 March 2026 00:39:20 +0000 (0:00:00.411) 0:00:08.316 *********
2026-03-17 00:39:25.454987 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:25.454999 | orchestrator |
2026-03-17 00:39:25.455012 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-17 00:39:25.455024 | orchestrator | Tuesday 17 March 2026 00:39:20 +0000 (0:00:00.405) 0:00:08.721 *********
2026-03-17 00:39:25.455036 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:25.455048 | orchestrator |
2026-03-17 00:39:25.455061 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-17 00:39:25.455073 | orchestrator | Tuesday 17 March 2026 00:39:21 +0000 (0:00:01.140) 0:00:09.862 *********
2026-03-17 00:39:25.455086 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-17 00:39:25.455098 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:25.455110 | orchestrator |
2026-03-17 00:39:25.455122 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-17 00:39:25.455134 | orchestrator | Tuesday 17 March 2026 00:39:22 +0000 (0:00:00.917) 0:00:10.780 *********
2026-03-17 00:39:25.455147 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:25.455160 | orchestrator |
2026-03-17 00:39:25.455174 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-17 00:39:25.455185 | orchestrator | Tuesday 17 March 2026 00:39:24 +0000 (0:00:01.558) 0:00:12.339 *********
2026-03-17 00:39:25.455196 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:25.455206 | orchestrator |
2026-03-17 00:39:25.455217 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:39:25.455228 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:39:25.455241 | orchestrator |
2026-03-17 00:39:25.455252 | orchestrator |
2026-03-17 00:39:25.455263 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:39:25.455283 | orchestrator | Tuesday 17 March 2026 00:39:25 +0000 (0:00:00.910) 0:00:13.249 *********
2026-03-17 00:39:25.455294 | orchestrator | ===============================================================================
2026-03-17 00:39:25.455305 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 4.98s
2026-03-17 00:39:25.455316 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.56s
2026-03-17 00:39:25.455326 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.19s
2026-03-17 00:39:25.455337 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.14s
2026-03-17 00:39:25.455348 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s
2026-03-17 00:39:25.455358 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.91s
2026-03-17 00:39:25.455369 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.66s
2026-03-17 00:39:25.455403 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.49s
2026-03-17 00:39:25.455414 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s
2026-03-17 00:39:25.455424 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s
2026-03-17 00:39:25.455435 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2026-03-17 00:39:25.725400 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-17 00:39:25.753761 | orchestrator |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
2026-03-17 00:39:25.753861 | orchestrator |                                  Dload  Upload   Total   Spent    Left  Speed
2026-03-17 00:39:25.833564 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 176 0 --:--:-- --:--:-- --:--:-- 177
2026-03-17 00:39:25.846779 | orchestrator | + osism apply --environment custom workarounds
2026-03-17 00:39:27.709818 | orchestrator | 2026-03-17 00:39:27 | INFO  | Trying to run play workarounds in environment custom
2026-03-17 00:39:37.976934 | orchestrator | 2026-03-17 00:39:37 | INFO  | Task 48556cbc-58f3-478c-956a-2501dc7c235e (workarounds) was prepared for execution.
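Every deployment step in this job follows the same pattern: the driver script invokes `osism apply <play>` (optionally with `--environment` and extra variables), the manager queues it as a task, and the Ansible output streams back into this console. A minimal sketch of that sequence — the `apply` function below is a stand-in that only echoes the invocation, not the real `osism` CLI; the arguments are copied from this log:

```shell
#!/bin/sh
# Stand-in for the `osism apply` CLI: echoes each invocation so the ordering
# of the deploy steps seen in this log can be illustrated without a manager.
apply() {
    echo "osism apply $*"
}

# Deploy sequence as it appears in this log (arguments copied verbatim).
apply wireguard
apply --environment custom workarounds
apply reboot -l testbed-nodes -e ireallymeanit=yes
```

Each call blocks until the queued task's output has been fully relayed, which is why the log shows the "It takes a moment until task … has been started" notice between the command and its play output.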
2026-03-17 00:39:37.977057 | orchestrator | 2026-03-17 00:39:37 | INFO  | It takes a moment until task 48556cbc-58f3-478c-956a-2501dc7c235e (workarounds) has been started and output is visible here.
2026-03-17 00:40:01.537295 | orchestrator |
2026-03-17 00:40:01.537486 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 00:40:01.537510 | orchestrator |
2026-03-17 00:40:01.537526 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-17 00:40:01.537543 | orchestrator | Tuesday 17 March 2026 00:39:41 +0000 (0:00:00.124) 0:00:00.124 *********
2026-03-17 00:40:01.537559 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-17 00:40:01.537575 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-17 00:40:01.537591 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-17 00:40:01.537606 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-17 00:40:01.537621 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-17 00:40:01.537636 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-17 00:40:01.537651 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-17 00:40:01.537667 | orchestrator |
2026-03-17 00:40:01.537682 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-17 00:40:01.537698 | orchestrator |
2026-03-17 00:40:01.537715 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-17 00:40:01.537731 | orchestrator | Tuesday 17 March 2026 00:39:42 +0000 (0:00:00.758) 0:00:00.882 *********
2026-03-17 00:40:01.537747 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:01.537764 | orchestrator |
2026-03-17 00:40:01.537808 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-17 00:40:01.537824 | orchestrator |
2026-03-17 00:40:01.537840 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-17 00:40:01.537858 | orchestrator | Tuesday 17 March 2026 00:39:44 +0000 (0:00:02.116) 0:00:02.999 *********
2026-03-17 00:40:01.537873 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:01.537890 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:01.537905 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:01.537921 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:01.537937 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:01.537953 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:01.537968 | orchestrator |
2026-03-17 00:40:01.537983 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-17 00:40:01.537999 | orchestrator |
2026-03-17 00:40:01.538067 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-17 00:40:01.538105 | orchestrator | Tuesday 17 March 2026 00:39:46 +0000 (0:00:01.899) 0:00:04.898 *********
2026-03-17 00:40:01.538123 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:40:01.538140 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:40:01.538156 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:40:01.538171 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:40:01.538187 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:40:01.538203 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:40:01.538218 | orchestrator |
2026-03-17 00:40:01.538234 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-17 00:40:01.538249 | orchestrator | Tuesday 17 March 2026 00:39:48 +0000 (0:00:01.535) 0:00:06.433 *********
2026-03-17 00:40:01.538265 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:01.538281 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:01.538296 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:01.538311 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:01.538349 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:01.538361 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:01.538375 | orchestrator |
2026-03-17 00:40:01.538388 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-17 00:40:01.538401 | orchestrator | Tuesday 17 March 2026 00:39:51 +0000 (0:00:02.902) 0:00:09.336 *********
2026-03-17 00:40:01.538416 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:01.538430 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:01.538445 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:01.538460 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:01.538475 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:01.538489 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:01.538503 | orchestrator |
2026-03-17 00:40:01.538518 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-17 00:40:01.538532 | orchestrator |
2026-03-17 00:40:01.538546 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-17 00:40:01.538561 | orchestrator | Tuesday 17 March 2026 00:39:51 +0000 (0:00:00.649) 0:00:09.985 *********
2026-03-17 00:40:01.538575 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:01.538589 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:01.538603 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:01.538640 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:01.538656 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:01.538670 | orchestrator | changed: [testbed-manager]
2026-03-17 00:40:01.538685 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:01.538712 | orchestrator |
2026-03-17 00:40:01.538727 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-17 00:40:01.538741 | orchestrator | Tuesday 17 March 2026 00:39:53 +0000 (0:00:01.669) 0:00:11.655 *********
2026-03-17 00:40:01.538756 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:01.538771 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:01.538785 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:01.538800 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:01.538815 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:01.538829 | orchestrator | changed: [testbed-manager]
2026-03-17 00:40:01.538867 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:01.538883 | orchestrator |
2026-03-17 00:40:01.538897 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-17 00:40:01.538912 | orchestrator | Tuesday 17 March 2026 00:39:54 +0000 (0:00:01.340) 0:00:12.995 *********
2026-03-17 00:40:01.538927 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:01.538941 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:01.538956 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:01.538971 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:01.538985 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:01.539000 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:01.539014 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:01.539029 | orchestrator |
2026-03-17 00:40:01.539043 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-17 00:40:01.539058 | orchestrator | Tuesday 17 March 2026 00:39:56 +0000 (0:00:01.334) 0:00:14.330 *********
2026-03-17 00:40:01.539073 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:01.539088 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:01.539102 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:01.539117 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:01.539129 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:01.539142 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:01.539155 | orchestrator | changed: [testbed-manager]
2026-03-17 00:40:01.539167 | orchestrator |
2026-03-17 00:40:01.539179 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-17 00:40:01.539193 | orchestrator | Tuesday 17 March 2026 00:39:57 +0000 (0:00:01.632) 0:00:15.962 *********
2026-03-17 00:40:01.539208 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:01.539222 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:01.539237 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:01.539251 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:01.539266 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:01.539280 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:01.539295 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:40:01.539309 | orchestrator |
2026-03-17 00:40:01.539372 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-17 00:40:01.539390 | orchestrator |
2026-03-17 00:40:01.539405 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-17 00:40:01.539420 | orchestrator | Tuesday 17 March 2026 00:39:58 +0000 (0:00:00.577) 0:00:16.540 *********
2026-03-17 00:40:01.539435 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:01.539448 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:01.539463 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:01.539477 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:01.539492 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:01.539506 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:01.539530 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:01.539545 | orchestrator |
2026-03-17 00:40:01.539560 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:40:01.539575 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:40:01.539592 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:01.539617 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:01.539630 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:01.539642 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:01.539654 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:01.539667 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:01.539679 | orchestrator |
2026-03-17 00:40:01.539691 | orchestrator |
2026-03-17 00:40:01.539704 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:40:01.539716 | orchestrator | Tuesday 17 March 2026 00:40:01 +0000 (0:00:03.120) 0:00:19.660 *********
2026-03-17 00:40:01.539726 | orchestrator | ===============================================================================
2026-03-17 00:40:01.539738 | orchestrator | Install python3-docker -------------------------------------------------- 3.12s
2026-03-17 00:40:01.539748 | orchestrator | Run update-ca-certificates ---------------------------------------------- 2.90s
2026-03-17 00:40:01.539759 | orchestrator | Apply netplan configuration --------------------------------------------- 2.12s
2026-03-17 00:40:01.539770 | orchestrator | Apply netplan configuration --------------------------------------------- 1.90s
2026-03-17 00:40:01.539779 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.67s
2026-03-17 00:40:01.539790 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.63s
2026-03-17 00:40:01.539802 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.54s
2026-03-17 00:40:01.539814 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.34s
2026-03-17 00:40:01.539826 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.33s
2026-03-17 00:40:01.539838 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s
2026-03-17 00:40:01.539851 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s
2026-03-17 00:40:01.539873 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.58s
2026-03-17 00:40:02.287952 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-17 00:40:14.287812 | orchestrator | 2026-03-17 00:40:14 | INFO  | Task 82a8c943-caa2-467b-bc38-7338bdb5811c (reboot) was prepared for execution.
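The reboot play guards against accidental invocation: its first task exits unless the extra variable `ireallymeanit=yes` was passed, which is why the "Exit playbook, if user did not mean to reboot systems" task reports `skipping` for every node. The guard logic amounts to the following shell sketch (an assumption about the playbook's behavior, not its actual implementation):

```shell
#!/bin/sh
# Sketch of the confirmation guard: only proceed when the caller explicitly
# passed ireallymeanit=yes (as done via `-e ireallymeanit=yes` in this log).
confirm_reboot() {
    if [ "${1:-}" = "ireallymeanit=yes" ]; then
        echo "rebooting"
    else
        echo "refused"
    fi
}

confirm_reboot "ireallymeanit=yes"   # prints "rebooting"
confirm_reboot                       # prints "refused"
```

Because the job passes the variable, the guard is skipped and the per-node reboot tasks run; the "wait for the reboot to complete" variant is skipped here, so the play fires the reboot without blocking.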
2026-03-17 00:40:14.287924 | orchestrator | 2026-03-17 00:40:14 | INFO  | It takes a moment until task 82a8c943-caa2-467b-bc38-7338bdb5811c (reboot) has been started and output is visible here.
2026-03-17 00:40:24.335933 | orchestrator |
2026-03-17 00:40:24.336025 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:40:24.336037 | orchestrator |
2026-03-17 00:40:24.336046 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:40:24.336055 | orchestrator | Tuesday 17 March 2026 00:40:18 +0000 (0:00:00.216) 0:00:00.216 *********
2026-03-17 00:40:24.336063 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:24.336072 | orchestrator |
2026-03-17 00:40:24.336080 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:40:24.336088 | orchestrator | Tuesday 17 March 2026 00:40:18 +0000 (0:00:00.111) 0:00:00.328 *********
2026-03-17 00:40:24.336096 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:24.336104 | orchestrator |
2026-03-17 00:40:24.336112 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:40:24.336138 | orchestrator | Tuesday 17 March 2026 00:40:19 +0000 (0:00:00.905) 0:00:01.234 *********
2026-03-17 00:40:24.336147 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:24.336154 | orchestrator |
2026-03-17 00:40:24.336162 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:40:24.336170 | orchestrator |
2026-03-17 00:40:24.336178 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:40:24.336185 | orchestrator | Tuesday 17 March 2026 00:40:19 +0000 (0:00:00.127) 0:00:01.362 *********
2026-03-17 00:40:24.336193 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:24.336201 | orchestrator |
2026-03-17 00:40:24.336209 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:40:24.336216 | orchestrator | Tuesday 17 March 2026 00:40:19 +0000 (0:00:00.097) 0:00:01.460 *********
2026-03-17 00:40:24.336224 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:24.336232 | orchestrator |
2026-03-17 00:40:24.336239 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:40:24.336258 | orchestrator | Tuesday 17 March 2026 00:40:20 +0000 (0:00:00.679) 0:00:02.139 *********
2026-03-17 00:40:24.336266 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:24.336274 | orchestrator |
2026-03-17 00:40:24.336282 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:40:24.336290 | orchestrator |
2026-03-17 00:40:24.336352 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:40:24.336361 | orchestrator | Tuesday 17 March 2026 00:40:20 +0000 (0:00:00.097) 0:00:02.237 *********
2026-03-17 00:40:24.336369 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:24.336377 | orchestrator |
2026-03-17 00:40:24.336384 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:40:24.336392 | orchestrator | Tuesday 17 March 2026 00:40:20 +0000 (0:00:00.153) 0:00:02.390 *********
2026-03-17 00:40:24.336400 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:24.336408 | orchestrator |
2026-03-17 00:40:24.336416 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:40:24.336424 | orchestrator | Tuesday 17 March 2026 00:40:21 +0000 (0:00:00.652) 0:00:03.043 *********
2026-03-17 00:40:24.336432 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:24.336440 | orchestrator |
2026-03-17 00:40:24.336448 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:40:24.336455 | orchestrator |
2026-03-17 00:40:24.336463 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:40:24.336471 | orchestrator | Tuesday 17 March 2026 00:40:21 +0000 (0:00:00.093) 0:00:03.137 *********
2026-03-17 00:40:24.336478 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:24.336486 | orchestrator |
2026-03-17 00:40:24.336496 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:40:24.336504 | orchestrator | Tuesday 17 March 2026 00:40:21 +0000 (0:00:00.098) 0:00:03.235 *********
2026-03-17 00:40:24.336513 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:24.336522 | orchestrator |
2026-03-17 00:40:24.336531 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:40:24.336540 | orchestrator | Tuesday 17 March 2026 00:40:22 +0000 (0:00:00.714) 0:00:03.949 *********
2026-03-17 00:40:24.336548 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:24.336557 | orchestrator |
2026-03-17 00:40:24.336566 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:40:24.336575 | orchestrator |
2026-03-17 00:40:24.336584 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:40:24.336592 | orchestrator | Tuesday 17 March 2026 00:40:22 +0000 (0:00:00.100) 0:00:04.049 *********
2026-03-17 00:40:24.336602 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:24.336611 | orchestrator |
2026-03-17 00:40:24.336620 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:40:24.336629 | orchestrator | Tuesday 17 March 2026 00:40:22 +0000 (0:00:00.095) 0:00:04.144 *********
2026-03-17 00:40:24.336644 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:24.336653 | orchestrator |
2026-03-17 00:40:24.336662 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:40:24.336671 | orchestrator | Tuesday 17 March 2026 00:40:23 +0000 (0:00:00.665) 0:00:04.810 *********
2026-03-17 00:40:24.336680 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:24.336688 | orchestrator |
2026-03-17 00:40:24.336697 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:40:24.336704 | orchestrator |
2026-03-17 00:40:24.336712 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:40:24.336720 | orchestrator | Tuesday 17 March 2026 00:40:23 +0000 (0:00:00.106) 0:00:04.917 *********
2026-03-17 00:40:24.336728 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:24.336736 | orchestrator |
2026-03-17 00:40:24.336744 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:40:24.336751 | orchestrator | Tuesday 17 March 2026 00:40:23 +0000 (0:00:00.085) 0:00:05.002 *********
2026-03-17 00:40:24.336759 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:24.336767 | orchestrator |
2026-03-17 00:40:24.336775 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:40:24.336783 | orchestrator | Tuesday 17 March 2026 00:40:24 +0000 (0:00:00.691) 0:00:05.694 *********
2026-03-17 00:40:24.336804 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:24.336813 | orchestrator |
2026-03-17 00:40:24.336821 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:40:24.336829 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:40:24.336838 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:40:24.336846 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:40:24.336854 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:40:24.336862 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:40:24.336870 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:40:24.336877 | orchestrator | 2026-03-17 00:40:24.336885 | orchestrator | 2026-03-17 00:40:24.336893 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:40:24.336901 | orchestrator | Tuesday 17 March 2026 00:40:24 +0000 (0:00:00.034) 0:00:05.728 ********* 2026-03-17 00:40:24.336913 | orchestrator | =============================================================================== 2026-03-17 00:40:24.336921 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.31s 2026-03-17 00:40:24.336929 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.64s 2026-03-17 00:40:24.336937 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.56s 2026-03-17 00:40:24.513669 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-17 00:40:36.241591 | orchestrator | 2026-03-17 00:40:36 | INFO  | Task 372d17a1-b2d5-4524-9dd8-bde2515698c5 (wait-for-connection) was prepared for execution. 2026-03-17 00:40:36.241668 | orchestrator | 2026-03-17 00:40:36 | INFO  | It takes a moment until task 372d17a1-b2d5-4524-9dd8-bde2515698c5 (wait-for-connection) has been started and output is visible here. 
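The sequence above fires the reboot without waiting ("do not wait for the reboot to complete") and then re-establishes contact with a separate `wait-for-connection` play. A minimal shell sketch of that second step follows; it is a hypothetical stand-in (the real play uses Ansible's `wait_for_connection` module, not this function), and the `SSH`/`POLL_INTERVAL` overrides are assumptions added here for testability:

```shell
# Hypothetical helper mirroring what the wait-for-connection play does:
# poll a rebooted host over SSH until it answers again, up to a deadline.
# SSH and POLL_INTERVAL are test seams, not part of the real testbed scripts.
wait_for_ssh() {
    local host="$1" timeout="${2:-600}"
    local deadline=$(( $(date +%s) + timeout ))
    # BatchMode avoids password prompts; ConnectTimeout bounds each attempt.
    until "${SSH:-ssh}" -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        if (( $(date +%s) >= deadline )); then
            echo "timed out waiting for ${host}" >&2
            return 1
        fi
        sleep "${POLL_INTERVAL:-5}"
    done
}
```

In the recap above all six nodes come back within about 11.5 seconds, so a single polling task with a generous timeout is sufficient.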
2026-03-17 00:40:51.888198 | orchestrator |
2026-03-17 00:40:51.888416 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-03-17 00:40:51.888443 | orchestrator |
2026-03-17 00:40:51.888458 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-03-17 00:40:51.888471 | orchestrator | Tuesday 17 March 2026 00:40:40 +0000 (0:00:00.167) 0:00:00.167 *********
2026-03-17 00:40:51.888484 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:51.888499 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:51.888513 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:51.888527 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:51.888541 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:51.888556 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:51.888571 | orchestrator |
2026-03-17 00:40:51.888585 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:40:51.888599 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:40:51.888614 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:40:51.888627 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:40:51.888640 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:40:51.888652 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:40:51.888665 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:40:51.888680 | orchestrator |
2026-03-17 00:40:51.888694 | orchestrator |
2026-03-17 00:40:51.888709 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:40:51.888722 | orchestrator | Tuesday 17 March 2026 00:40:51 +0000 (0:00:11.487) 0:00:11.654 *********
2026-03-17 00:40:51.888736 | orchestrator | ===============================================================================
2026-03-17 00:40:51.888749 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.49s
2026-03-17 00:40:52.141105 | orchestrator | + osism apply hddtemp
2026-03-17 00:41:04.190218 | orchestrator | 2026-03-17 00:41:04 | INFO  | Task 7d723dbe-0238-42d0-b8bb-8b29aaf30663 (hddtemp) was prepared for execution.
2026-03-17 00:41:04.190411 | orchestrator | 2026-03-17 00:41:04 | INFO  | It takes a moment until task 7d723dbe-0238-42d0-b8bb-8b29aaf30663 (hddtemp) has been started and output is visible here.
2026-03-17 00:41:32.210466 | orchestrator |
2026-03-17 00:41:32.210580 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-03-17 00:41:32.210599 | orchestrator |
2026-03-17 00:41:32.210613 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-03-17 00:41:32.210626 | orchestrator | Tuesday 17 March 2026 00:41:08 +0000 (0:00:00.228) 0:00:00.228 *********
2026-03-17 00:41:32.210639 | orchestrator | ok: [testbed-manager]
2026-03-17 00:41:32.210654 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:41:32.210668 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:41:32.210681 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:41:32.210694 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:41:32.210707 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:41:32.210720 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:41:32.210733 | orchestrator |
2026-03-17 00:41:32.210746 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-03-17 00:41:32.210759 | orchestrator | Tuesday 17 March 2026 00:41:08 +0000 (0:00:00.583) 0:00:00.811 *********
2026-03-17 00:41:32.210774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:41:32.210817 | orchestrator |
2026-03-17 00:41:32.210832 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-03-17 00:41:32.210845 | orchestrator | Tuesday 17 March 2026 00:41:09 +0000 (0:00:01.001) 0:00:01.813 *********
2026-03-17 00:41:32.210858 | orchestrator | ok: [testbed-manager]
2026-03-17 00:41:32.210872 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:41:32.210885 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:41:32.210898 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:41:32.210911 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:41:32.210925 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:41:32.210938 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:41:32.210952 | orchestrator |
2026-03-17 00:41:32.210966 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-03-17 00:41:32.210995 | orchestrator | Tuesday 17 March 2026 00:41:11 +0000 (0:00:01.971) 0:00:03.785 *********
2026-03-17 00:41:32.211009 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:32.211023 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:32.211036 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:32.211050 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:32.211063 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:32.211076 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:32.211089 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:32.211103 | orchestrator |
2026-03-17 00:41:32.211118 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-03-17 00:41:32.211132 | orchestrator | Tuesday 17 March 2026 00:41:12 +0000 (0:00:01.002) 0:00:04.788 *********
2026-03-17 00:41:32.211145 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:41:32.211158 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:41:32.211172 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:41:32.211186 | orchestrator | ok: [testbed-manager]
2026-03-17 00:41:32.211199 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:41:32.211212 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:41:32.211259 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:41:32.211271 | orchestrator |
2026-03-17 00:41:32.211282 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-03-17 00:41:32.211294 | orchestrator | Tuesday 17 March 2026 00:41:13 +0000 (0:00:01.117) 0:00:05.905 *********
2026-03-17 00:41:32.211306 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:41:32.211317 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:41:32.211328 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:41:32.211339 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:32.211351 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:41:32.211363 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:41:32.211375 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:41:32.211387 | orchestrator |
2026-03-17 00:41:32.211399 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-03-17 00:41:32.211410 | orchestrator | Tuesday 17 March 2026 00:41:14 +0000 (0:00:00.677) 0:00:06.582 *********
2026-03-17 00:41:32.211422 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:32.211434 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:32.211445 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:32.211456 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:32.211466 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:32.211477 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:32.211487 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:32.211497 | orchestrator |
2026-03-17 00:41:32.211509 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-03-17 00:41:32.211521 | orchestrator | Tuesday 17 March 2026 00:41:28 +0000 (0:00:14.398) 0:00:20.980 *********
2026-03-17 00:41:32.211532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:41:32.211545 | orchestrator |
2026-03-17 00:41:32.211570 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-03-17 00:41:32.211581 | orchestrator | Tuesday 17 March 2026 00:41:29 +0000 (0:00:01.130) 0:00:22.111 *********
2026-03-17 00:41:32.211592 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:32.211604 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:32.211615 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:32.211626 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:32.211637 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:32.211648 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:32.211659 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:32.211670 | orchestrator |
2026-03-17 00:41:32.211682 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:41:32.211694 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:41:32.211731 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:41:32.211744 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:41:32.211756 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:41:32.211769 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:41:32.211780 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:41:32.211791 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:41:32.211802 | orchestrator |
2026-03-17 00:41:32.211812 | orchestrator |
2026-03-17 00:41:32.211823 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:41:32.211833 | orchestrator | Tuesday 17 March 2026 00:41:31 +0000 (0:00:01.903) 0:00:24.015 *********
2026-03-17 00:41:32.211844 | orchestrator | ===============================================================================
2026-03-17 00:41:32.211855 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.40s
2026-03-17 00:41:32.211865 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.97s
2026-03-17 00:41:32.211874 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.90s
2026-03-17 00:41:32.211893 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.13s
2026-03-17 00:41:32.211904 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.12s
2026-03-17 00:41:32.211915 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.00s
2026-03-17 00:41:32.211925 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.00s
2026-03-17 00:41:32.211936 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.68s
2026-03-17 00:41:32.211947 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.58s
2026-03-17 00:41:32.527499 | orchestrator | ++ semver 9.5.0 7.1.1
2026-03-17 00:41:32.572637 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-17 00:41:32.572752 | orchestrator | + sudo systemctl restart manager.service
2026-03-17 00:41:46.373315 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-17 00:41:46.373433 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-17 00:41:46.373453 | orchestrator | + local max_attempts=60
2026-03-17 00:41:46.373468 | orchestrator | + local name=ceph-ansible
2026-03-17 00:41:46.373479 | orchestrator | + local attempt_num=1
2026-03-17 00:41:46.373491 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:41:46.404388 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:41:46.404464 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-17 00:41:46.404474 | orchestrator | + sleep 5
2026-03-17 00:41:51.410729 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:41:51.453991 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:41:51.454138 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-17 00:41:51.454155 | orchestrator | + sleep 5
2026-03-17 00:41:56.456906 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:41:56.496302 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:41:56.496426 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-17 00:41:56.496444 | orchestrator | + sleep 5
2026-03-17 00:42:01.499994 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:42:01.538677 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:42:01.538778 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-17 00:42:01.538794 | orchestrator | + sleep 5
2026-03-17 00:42:06.543110 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:42:06.577663 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:42:06.577741 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-17 00:42:06.577751 | orchestrator | + sleep 5
2026-03-17 00:42:11.582294 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:42:11.622378 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:42:11.622485 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-17 00:42:11.622504 | orchestrator | + sleep 5
2026-03-17 00:42:16.627044 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:42:16.666696 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:42:16.666785 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-17 00:42:16.666797 | orchestrator | + sleep 5
2026-03-17 00:42:21.669505 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:42:21.718384 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-17 00:42:21.738490 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-17 00:42:21.738575 | orchestrator | + sleep 5
2026-03-17 00:42:26.739875 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:42:26.759550 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-17 00:42:26.759640 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-17 00:42:26.760013 | orchestrator | + sleep 5
2026-03-17 00:42:31.762518 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:42:31.802313 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-17 00:42:31.802533 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-17 00:42:31.802553 | orchestrator | + sleep 5
2026-03-17 00:42:36.806489 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:42:36.836519 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-17 00:42:36.836603 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-17 00:42:36.836616 | orchestrator | + sleep 5
2026-03-17 00:42:41.842468 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:42:41.880592 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-17 00:42:41.880680 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-17 00:42:41.880694 | orchestrator | + sleep 5
2026-03-17 00:42:46.885505 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:42:46.921110 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-17 00:42:46.921231 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-17 00:42:46.921246 | orchestrator | + sleep 5
2026-03-17 00:42:51.924386 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-17 00:42:51.960874 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:42:51.960963 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-17 00:42:51.960978 | orchestrator | + local max_attempts=60
2026-03-17 00:42:51.960991 | orchestrator | + local name=kolla-ansible
2026-03-17 00:42:51.961002 | orchestrator | + local attempt_num=1
2026-03-17 00:42:51.961674 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-17 00:42:52.005187 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:42:52.005269 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-17 00:42:52.005282 | orchestrator | + local max_attempts=60
2026-03-17 00:42:52.005330 | orchestrator | + local name=osism-ansible
2026-03-17 00:42:52.005352 | orchestrator | + local attempt_num=1
2026-03-17 00:42:52.005535 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-17 00:42:52.031705 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-17 00:42:52.031777 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-17 00:42:52.031790 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-17 00:42:52.190653 | orchestrator | ARA in ceph-ansible already disabled.
2026-03-17 00:42:52.318525 | orchestrator | ARA in kolla-ansible already disabled.
2026-03-17 00:42:52.453570 | orchestrator | ARA in osism-ansible already disabled.
2026-03-17 00:42:52.612009 | orchestrator | ARA in osism-kubernetes already disabled.
2026-03-17 00:42:52.612423 | orchestrator | + osism apply gather-facts
2026-03-17 00:43:04.582204 | orchestrator | 2026-03-17 00:43:04 | INFO  | Task 86c59aea-6427-4e32-85df-ce0e8a0c81fb (gather-facts) was prepared for execution.
2026-03-17 00:43:04.582314 | orchestrator | 2026-03-17 00:43:04 | INFO  | It takes a moment until task 86c59aea-6427-4e32-85df-ce0e8a0c81fb (gather-facts) has been started and output is visible here.
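The `set -x` trace above shows `wait_for_container_healthy` polling `docker inspect` until each container reports `healthy` (ceph-ansible goes `unhealthy` → `starting` → `healthy` in roughly a minute after the manager restart). A rough reconstruction follows; it is inferred from the trace, not copied from the testbed scripts, and the `DOCKER`/`POLL_INTERVAL` overrides plus the failure message are assumptions added for testability:

```shell
# Sketch of wait_for_container_healthy as inferred from the set -x trace:
# read .State.Health.Status via docker inspect, compare against "healthy",
# give up after max_attempts polls, and sleep 5 seconds between polls.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$("${DOCKER:-/usr/bin/docker}" inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        # Post-increment: the counter is compared first, then bumped,
        # matching the "(( attempt_num++ == max_attempts ))" lines in the trace.
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep "${POLL_INTERVAL:-5}"
    done
}
```

With 60 attempts at 5-second intervals this gives each container about five minutes to pass its Docker health check before the script fails.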
2026-03-17 00:43:17.756786 | orchestrator |
2026-03-17 00:43:17.756897 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-17 00:43:17.756913 | orchestrator |
2026-03-17 00:43:17.756925 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-17 00:43:17.756936 | orchestrator | Tuesday 17 March 2026 00:43:08 +0000 (0:00:00.190) 0:00:00.190 *********
2026-03-17 00:43:17.756948 | orchestrator | ok: [testbed-manager]
2026-03-17 00:43:17.756961 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:43:17.756972 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:43:17.756983 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:43:17.756994 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:43:17.757005 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:43:17.757016 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:43:17.757048 | orchestrator |
2026-03-17 00:43:17.757061 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-17 00:43:17.757071 | orchestrator |
2026-03-17 00:43:17.757082 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-17 00:43:17.757138 | orchestrator | Tuesday 17 March 2026 00:43:16 +0000 (0:00:08.601) 0:00:08.791 *********
2026-03-17 00:43:17.757155 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:43:17.757174 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:43:17.757190 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:43:17.757206 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:43:17.757226 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:43:17.757245 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:43:17.757265 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:43:17.757283 | orchestrator |
2026-03-17 00:43:17.757301 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:43:17.757314 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:43:17.757328 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:43:17.757341 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:43:17.757354 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:43:17.757366 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:43:17.757379 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:43:17.757392 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:43:17.757431 | orchestrator |
2026-03-17 00:43:17.757443 | orchestrator |
2026-03-17 00:43:17.757454 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:43:17.757465 | orchestrator | Tuesday 17 March 2026 00:43:17 +0000 (0:00:00.499) 0:00:09.291 *********
2026-03-17 00:43:17.757476 | orchestrator | ===============================================================================
2026-03-17 00:43:17.757486 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.60s
2026-03-17 00:43:17.757497 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2026-03-17 00:43:18.034511 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-03-17 00:43:18.047536 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-03-17 00:43:18.063637 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-03-17 00:43:18.074847 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-03-17 00:43:18.084743 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-03-17 00:43:18.093758 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-03-17 00:43:18.109914 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-03-17 00:43:18.121634 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-03-17 00:43:18.147389 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-03-17 00:43:18.181702 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-03-17 00:43:18.195296 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-03-17 00:43:18.209513 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-03-17 00:43:18.218990 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-03-17 00:43:18.231086 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-03-17 00:43:18.241003 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-03-17 00:43:18.250303 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-03-17 00:43:18.266677 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-03-17 00:43:18.276660 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-03-17 00:43:18.287011 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-03-17 00:43:18.299820 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-03-17 00:43:18.316357 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-03-17 00:43:18.327527 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-03-17 00:43:18.344072 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-03-17 00:43:18.354364 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-17 00:43:18.864775 | orchestrator | ok: Runtime: 0:24:05.317066
2026-03-17 00:43:18.977058 |
2026-03-17 00:43:18.977266 | TASK [Deploy services]
2026-03-17 00:43:19.525647 | orchestrator | skipping: Conditional result was False
2026-03-17 00:43:19.541784 |
2026-03-17 00:43:19.541937 | TASK [Deploy in a nutshell]
2026-03-17 00:43:20.239681 | orchestrator | + set -e
2026-03-17 00:43:20.239873 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-17 00:43:20.239897 | orchestrator | ++ export INTERACTIVE=false
2026-03-17 00:43:20.239918 | orchestrator | ++ INTERACTIVE=false
2026-03-17 00:43:20.239931 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-17 00:43:20.239944 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-17 00:43:20.239957 | orchestrator | + source /opt/manager-vars.sh
2026-03-17 00:43:20.240000 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-17 00:43:20.240029 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-17 00:43:20.240043 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-17 00:43:20.240058 | orchestrator | ++ CEPH_VERSION=reef
2026-03-17 00:43:20.240070 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-17 00:43:20.240122 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-17 00:43:20.240136 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-17 00:43:20.240157 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-17 00:43:20.240168 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-17 00:43:20.240182 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-17 00:43:20.240193 | orchestrator | ++ export ARA=false
2026-03-17 00:43:20.240204 | orchestrator | ++ ARA=false
2026-03-17 00:43:20.240215 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-17 00:43:20.240227 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-17 00:43:20.240238 | orchestrator | ++ export TEMPEST=true
2026-03-17 00:43:20.240248 | orchestrator | ++ TEMPEST=true
2026-03-17 00:43:20.240259 | orchestrator | ++ export IS_ZUUL=true
2026-03-17 00:43:20.240270 | orchestrator | ++ IS_ZUUL=true
2026-03-17 00:43:20.240281 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64
2026-03-17 00:43:20.240292 | orchestrator |
2026-03-17 00:43:20.240304 | orchestrator | # PULL IMAGES
2026-03-17 00:43:20.240314 | orchestrator |
2026-03-17 00:43:20.240326 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64
2026-03-17 00:43:20.240337 | orchestrator | ++ export EXTERNAL_API=false
2026-03-17 00:43:20.240348 | orchestrator | ++ EXTERNAL_API=false
2026-03-17 00:43:20.240358 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-17 00:43:20.240370 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-17 00:43:20.240381 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-17 00:43:20.240392 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-17 00:43:20.240403 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-17 00:43:20.240421 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-17 00:43:20.240432 | orchestrator | + echo
2026-03-17 00:43:20.240443 | orchestrator | + echo '# PULL IMAGES'
2026-03-17 00:43:20.240454 | orchestrator | + echo
2026-03-17 00:43:20.240478 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-17 00:43:20.280455 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-17 00:43:20.280565 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-17 00:43:21.936239 | orchestrator | 2026-03-17 00:43:21 | INFO  | Trying to run play pull-images in environment custom
2026-03-17 00:43:32.081359 | orchestrator | 2026-03-17 00:43:32 | INFO  | Task 8a10ef31-7329-437a-a854-bdaef757d69d (pull-images) was prepared for execution.
2026-03-17 00:43:32.081575 | orchestrator | 2026-03-17 00:43:32 | INFO  | Task 8a10ef31-7329-437a-a854-bdaef757d69d is running in background. No more output. Check ARA for logs.
2026-03-17 00:43:34.081836 | orchestrator | 2026-03-17 00:43:34 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-17 00:43:44.298404 | orchestrator | 2026-03-17 00:43:44 | INFO  | Task 55b4842f-1191-451f-9ef7-ee81968fe81e (wipe-partitions) was prepared for execution.
2026-03-17 00:43:44.298517 | orchestrator | 2026-03-17 00:43:44 | INFO  | It takes a moment until task 55b4842f-1191-451f-9ef7-ee81968fe81e (wipe-partitions) has been started and output is visible here.
2026-03-17 00:43:56.014219 | orchestrator | 2026-03-17 00:43:56.014332 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-17 00:43:56.014349 | orchestrator | 2026-03-17 00:43:56.014362 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-17 00:43:56.014381 | orchestrator | Tuesday 17 March 2026 00:43:48 +0000 (0:00:00.093) 0:00:00.093 ********* 2026-03-17 00:43:56.014393 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:43:56.014405 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:43:56.014417 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:43:56.014428 | orchestrator | 2026-03-17 00:43:56.014440 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-17 00:43:56.014482 | orchestrator | Tuesday 17 March 2026 00:43:48 +0000 (0:00:00.540) 0:00:00.633 ********* 2026-03-17 00:43:56.014494 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:43:56.014505 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:43:56.014516 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:43:56.014532 | orchestrator | 2026-03-17 00:43:56.014544 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-17 00:43:56.014555 | orchestrator | Tuesday 17 March 2026 00:43:48 +0000 (0:00:00.292) 0:00:00.926 ********* 2026-03-17 00:43:56.014566 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:43:56.014578 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:43:56.014589 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:43:56.014600 | orchestrator | 2026-03-17 00:43:56.014611 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-17 00:43:56.014622 | orchestrator | Tuesday 17 March 2026 00:43:49 +0000 (0:00:00.589) 0:00:01.515 ********* 2026-03-17 00:43:56.014633 | orchestrator | skipping: 
[testbed-node-3] 2026-03-17 00:43:56.014645 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:43:56.014658 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:43:56.014671 | orchestrator | 2026-03-17 00:43:56.014683 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-17 00:43:56.014696 | orchestrator | Tuesday 17 March 2026 00:43:49 +0000 (0:00:00.185) 0:00:01.701 ********* 2026-03-17 00:43:56.014708 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-17 00:43:56.014726 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-17 00:43:56.014739 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-17 00:43:56.014751 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-17 00:43:56.014764 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-17 00:43:56.014777 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-17 00:43:56.014790 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-17 00:43:56.014803 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-17 00:43:56.014816 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-17 00:43:56.014829 | orchestrator | 2026-03-17 00:43:56.014842 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-17 00:43:56.014855 | orchestrator | Tuesday 17 March 2026 00:43:50 +0000 (0:00:01.215) 0:00:02.916 ********* 2026-03-17 00:43:56.014869 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-17 00:43:56.014881 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-17 00:43:56.014894 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-17 00:43:56.014907 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-17 00:43:56.014919 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-17 00:43:56.014932 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-17 00:43:56.014944 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-17 00:43:56.014957 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-17 00:43:56.014969 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-17 00:43:56.014982 | orchestrator | 2026-03-17 00:43:56.014995 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-17 00:43:56.015006 | orchestrator | Tuesday 17 March 2026 00:43:52 +0000 (0:00:01.470) 0:00:04.387 ********* 2026-03-17 00:43:56.015017 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-17 00:43:56.015028 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-17 00:43:56.015039 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-17 00:43:56.015079 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-17 00:43:56.015091 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-17 00:43:56.015102 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-17 00:43:56.015113 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-17 00:43:56.015131 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-17 00:43:56.015151 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-17 00:43:56.015163 | orchestrator | 2026-03-17 00:43:56.015173 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-17 00:43:56.015184 | orchestrator | Tuesday 17 March 2026 00:43:54 +0000 (0:00:02.206) 0:00:06.593 ********* 2026-03-17 00:43:56.015195 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:43:56.015206 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:43:56.015217 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:43:56.015228 | orchestrator | 2026-03-17 00:43:56.015239 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-17 00:43:56.015249 | orchestrator | Tuesday 17 March 2026 00:43:55 +0000 (0:00:00.573) 0:00:07.166 ********* 2026-03-17 00:43:56.015261 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:43:56.015271 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:43:56.015282 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:43:56.015293 | orchestrator | 2026-03-17 00:43:56.015304 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:43:56.015316 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:43:56.015329 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:43:56.015358 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:43:56.015370 | orchestrator | 2026-03-17 00:43:56.015381 | orchestrator | 2026-03-17 00:43:56.015392 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:43:56.015403 | orchestrator | Tuesday 17 March 2026 00:43:55 +0000 (0:00:00.668) 0:00:07.835 ********* 2026-03-17 00:43:56.015414 | orchestrator | =============================================================================== 2026-03-17 00:43:56.015425 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.21s 2026-03-17 00:43:56.015436 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.47s 2026-03-17 00:43:56.015447 | orchestrator | Check device availability ----------------------------------------------- 1.22s 2026-03-17 00:43:56.015458 | orchestrator | Request device events from the kernel ----------------------------------- 0.67s 2026-03-17 00:43:56.015469 | orchestrator | Find all logical devices with prefix ceph 
------------------------------- 0.59s 2026-03-17 00:43:56.015480 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s 2026-03-17 00:43:56.015491 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.54s 2026-03-17 00:43:56.015502 | orchestrator | Remove all rook related logical devices --------------------------------- 0.29s 2026-03-17 00:43:56.015512 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.19s 2026-03-17 00:44:08.256867 | orchestrator | 2026-03-17 00:44:08 | INFO  | Task 07873666-157f-4e29-96f5-1ee52203b6e0 (facts) was prepared for execution. 2026-03-17 00:44:08.256959 | orchestrator | 2026-03-17 00:44:08 | INFO  | It takes a moment until task 07873666-157f-4e29-96f5-1ee52203b6e0 (facts) has been started and output is visible here. 2026-03-17 00:44:18.970210 | orchestrator | 2026-03-17 00:44:18.970288 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-17 00:44:18.970295 | orchestrator | 2026-03-17 00:44:18.970300 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-17 00:44:18.970305 | orchestrator | Tuesday 17 March 2026 00:44:11 +0000 (0:00:00.196) 0:00:00.196 ********* 2026-03-17 00:44:18.970310 | orchestrator | ok: [testbed-manager] 2026-03-17 00:44:18.970315 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:44:18.970319 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:44:18.970323 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:44:18.970342 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:18.970346 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:44:18.970350 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:44:18.970353 | orchestrator | 2026-03-17 00:44:18.970357 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-17 00:44:18.970361 | 
orchestrator | Tuesday 17 March 2026 00:44:12 +0000 (0:00:00.893) 0:00:01.090 ********* 2026-03-17 00:44:18.970365 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:44:18.970370 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:44:18.970375 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:44:18.970379 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:44:18.970382 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:18.970386 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:18.970390 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:44:18.970394 | orchestrator | 2026-03-17 00:44:18.970398 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-17 00:44:18.970401 | orchestrator | 2026-03-17 00:44:18.970405 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-17 00:44:18.970409 | orchestrator | Tuesday 17 March 2026 00:44:13 +0000 (0:00:01.111) 0:00:02.201 ********* 2026-03-17 00:44:18.970412 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:44:18.970416 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:44:18.970420 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:44:18.970424 | orchestrator | ok: [testbed-manager] 2026-03-17 00:44:18.970428 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:18.970432 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:44:18.970435 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:44:18.970439 | orchestrator | 2026-03-17 00:44:18.970443 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-17 00:44:18.970447 | orchestrator | 2026-03-17 00:44:18.970450 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-17 00:44:18.970454 | orchestrator | Tuesday 17 March 2026 00:44:18 +0000 (0:00:04.791) 0:00:06.992 ********* 2026-03-17 00:44:18.970458 | orchestrator | 
skipping: [testbed-manager] 2026-03-17 00:44:18.970462 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:44:18.970465 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:44:18.970469 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:44:18.970484 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:18.970488 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:18.970491 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:44:18.970495 | orchestrator | 2026-03-17 00:44:18.970499 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:44:18.970503 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:18.970508 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:18.970512 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:18.970516 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:18.970519 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:18.970523 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:18.970527 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:18.970531 | orchestrator | 2026-03-17 00:44:18.970535 | orchestrator | 2026-03-17 00:44:18.970538 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:44:18.970546 | orchestrator | Tuesday 17 March 2026 00:44:18 +0000 (0:00:00.478) 0:00:07.470 ********* 2026-03-17 00:44:18.970550 | orchestrator | =============================================================================== 
2026-03-17 00:44:18.970553 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.79s 2026-03-17 00:44:18.970557 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.11s 2026-03-17 00:44:18.970561 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.89s 2026-03-17 00:44:18.970565 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2026-03-17 00:44:20.932537 | orchestrator | 2026-03-17 00:44:20 | INFO  | Task 8994c03b-5de3-469c-bd2d-a91b16bfd857 (ceph-configure-lvm-volumes) was prepared for execution. 2026-03-17 00:44:20.932708 | orchestrator | 2026-03-17 00:44:20 | INFO  | It takes a moment until task 8994c03b-5de3-469c-bd2d-a91b16bfd857 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-17 00:44:30.211460 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-17 00:44:30.211570 | orchestrator | 2.16.14 2026-03-17 00:44:30.211585 | orchestrator | 2026-03-17 00:44:30.211595 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-17 00:44:30.211604 | orchestrator | 2026-03-17 00:44:30.211613 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-17 00:44:30.211622 | orchestrator | Tuesday 17 March 2026 00:44:24 +0000 (0:00:00.239) 0:00:00.239 ********* 2026-03-17 00:44:30.211633 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-17 00:44:30.211641 | orchestrator | 2026-03-17 00:44:30.211649 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-17 00:44:30.211657 | orchestrator | Tuesday 17 March 2026 00:44:24 +0000 (0:00:00.223) 0:00:00.463 ********* 2026-03-17 00:44:30.211665 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:30.211673 | orchestrator | 
2026-03-17 00:44:30.211681 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:30.211689 | orchestrator | Tuesday 17 March 2026 00:44:24 +0000 (0:00:00.201) 0:00:00.665 ********* 2026-03-17 00:44:30.211697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-17 00:44:30.211706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-17 00:44:30.211714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-17 00:44:30.211721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-17 00:44:30.211729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-17 00:44:30.211737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-17 00:44:30.211745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-17 00:44:30.211753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-17 00:44:30.211760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-17 00:44:30.211768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-17 00:44:30.211776 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-17 00:44:30.211784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-17 00:44:30.211800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-17 00:44:30.211809 | orchestrator | 2026-03-17 00:44:30.211817 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-03-17 00:44:30.211825 | orchestrator | Tuesday 17 March 2026 00:44:24 +0000 (0:00:00.394) 0:00:01.059 ********* 2026-03-17 00:44:30.211850 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.211859 | orchestrator | 2026-03-17 00:44:30.211867 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:30.211875 | orchestrator | Tuesday 17 March 2026 00:44:25 +0000 (0:00:00.168) 0:00:01.227 ********* 2026-03-17 00:44:30.211883 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.211890 | orchestrator | 2026-03-17 00:44:30.211898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:30.211906 | orchestrator | Tuesday 17 March 2026 00:44:25 +0000 (0:00:00.169) 0:00:01.396 ********* 2026-03-17 00:44:30.211914 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.211921 | orchestrator | 2026-03-17 00:44:30.211929 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:30.211937 | orchestrator | Tuesday 17 March 2026 00:44:25 +0000 (0:00:00.179) 0:00:01.576 ********* 2026-03-17 00:44:30.211949 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.211957 | orchestrator | 2026-03-17 00:44:30.211965 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:30.211973 | orchestrator | Tuesday 17 March 2026 00:44:25 +0000 (0:00:00.186) 0:00:01.762 ********* 2026-03-17 00:44:30.211981 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.211989 | orchestrator | 2026-03-17 00:44:30.211999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:30.212031 | orchestrator | Tuesday 17 March 2026 00:44:25 +0000 (0:00:00.183) 0:00:01.946 ********* 2026-03-17 00:44:30.212042 | orchestrator | skipping: 
[testbed-node-3] 2026-03-17 00:44:30.212052 | orchestrator | 2026-03-17 00:44:30.212061 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:30.212070 | orchestrator | Tuesday 17 March 2026 00:44:26 +0000 (0:00:00.174) 0:00:02.121 ********* 2026-03-17 00:44:30.212078 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.212085 | orchestrator | 2026-03-17 00:44:30.212093 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:30.212101 | orchestrator | Tuesday 17 March 2026 00:44:26 +0000 (0:00:00.163) 0:00:02.285 ********* 2026-03-17 00:44:30.212109 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.212117 | orchestrator | 2026-03-17 00:44:30.212125 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:30.212133 | orchestrator | Tuesday 17 March 2026 00:44:26 +0000 (0:00:00.178) 0:00:02.463 ********* 2026-03-17 00:44:30.212141 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069) 2026-03-17 00:44:30.212150 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069) 2026-03-17 00:44:30.212158 | orchestrator | 2026-03-17 00:44:30.212166 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:30.212188 | orchestrator | Tuesday 17 March 2026 00:44:26 +0000 (0:00:00.324) 0:00:02.788 ********* 2026-03-17 00:44:30.212196 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e46b8678-1baa-4ba8-a612-904460f97320) 2026-03-17 00:44:30.212204 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e46b8678-1baa-4ba8-a612-904460f97320) 2026-03-17 00:44:30.212212 | orchestrator | 2026-03-17 00:44:30.212220 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2026-03-17 00:44:30.212228 | orchestrator | Tuesday 17 March 2026 00:44:27 +0000 (0:00:00.472) 0:00:03.261 ********* 2026-03-17 00:44:30.212236 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f95d5766-a3db-4d15-9977-785c02a190f5) 2026-03-17 00:44:30.212243 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f95d5766-a3db-4d15-9977-785c02a190f5) 2026-03-17 00:44:30.212251 | orchestrator | 2026-03-17 00:44:30.212259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:30.212267 | orchestrator | Tuesday 17 March 2026 00:44:27 +0000 (0:00:00.508) 0:00:03.769 ********* 2026-03-17 00:44:30.212293 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2854fd14-3e82-4dcb-865e-ef6e028a2c86) 2026-03-17 00:44:30.212301 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2854fd14-3e82-4dcb-865e-ef6e028a2c86) 2026-03-17 00:44:30.212309 | orchestrator | 2026-03-17 00:44:30.212317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:30.212325 | orchestrator | Tuesday 17 March 2026 00:44:28 +0000 (0:00:00.624) 0:00:04.394 ********* 2026-03-17 00:44:30.212333 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-17 00:44:30.212340 | orchestrator | 2026-03-17 00:44:30.212348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:30.212356 | orchestrator | Tuesday 17 March 2026 00:44:28 +0000 (0:00:00.298) 0:00:04.692 ********* 2026-03-17 00:44:30.212368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-17 00:44:30.212376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-17 00:44:30.212384 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-17 00:44:30.212392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-17 00:44:30.212400 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-17 00:44:30.212407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-17 00:44:30.212415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-17 00:44:30.212423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-17 00:44:30.212431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-17 00:44:30.212438 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-17 00:44:30.212446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-17 00:44:30.212454 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-17 00:44:30.212461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-17 00:44:30.212469 | orchestrator | 2026-03-17 00:44:30.212477 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:30.212485 | orchestrator | Tuesday 17 March 2026 00:44:28 +0000 (0:00:00.343) 0:00:05.036 ********* 2026-03-17 00:44:30.212493 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.212501 | orchestrator | 2026-03-17 00:44:30.212509 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:30.212517 | orchestrator | Tuesday 17 March 2026 00:44:29 +0000 (0:00:00.180) 
0:00:05.216 ********* 2026-03-17 00:44:30.212524 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.212532 | orchestrator | 2026-03-17 00:44:30.212540 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:30.212548 | orchestrator | Tuesday 17 March 2026 00:44:29 +0000 (0:00:00.171) 0:00:05.388 ********* 2026-03-17 00:44:30.212556 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.212563 | orchestrator | 2026-03-17 00:44:30.212571 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:30.212579 | orchestrator | Tuesday 17 March 2026 00:44:29 +0000 (0:00:00.183) 0:00:05.571 ********* 2026-03-17 00:44:30.212587 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.212595 | orchestrator | 2026-03-17 00:44:30.212602 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:30.212611 | orchestrator | Tuesday 17 March 2026 00:44:29 +0000 (0:00:00.187) 0:00:05.759 ********* 2026-03-17 00:44:30.212619 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.212631 | orchestrator | 2026-03-17 00:44:30.212639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:30.212647 | orchestrator | Tuesday 17 March 2026 00:44:29 +0000 (0:00:00.177) 0:00:05.936 ********* 2026-03-17 00:44:30.212655 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.212663 | orchestrator | 2026-03-17 00:44:30.212671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:30.212679 | orchestrator | Tuesday 17 March 2026 00:44:30 +0000 (0:00:00.178) 0:00:06.115 ********* 2026-03-17 00:44:30.212687 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:30.212694 | orchestrator | 2026-03-17 00:44:30.212707 | orchestrator | TASK [Add known partitions to the 
list of available block devices] ************* 2026-03-17 00:44:37.350268 | orchestrator | Tuesday 17 March 2026 00:44:30 +0000 (0:00:00.168) 0:00:06.284 ********* 2026-03-17 00:44:37.350355 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.350366 | orchestrator | 2026-03-17 00:44:37.350374 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:37.350382 | orchestrator | Tuesday 17 March 2026 00:44:30 +0000 (0:00:00.193) 0:00:06.477 ********* 2026-03-17 00:44:37.350389 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-17 00:44:37.350396 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-17 00:44:37.350403 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-17 00:44:37.350409 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-17 00:44:37.350416 | orchestrator | 2026-03-17 00:44:37.350423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:37.350429 | orchestrator | Tuesday 17 March 2026 00:44:31 +0000 (0:00:00.872) 0:00:07.350 ********* 2026-03-17 00:44:37.350435 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.350442 | orchestrator | 2026-03-17 00:44:37.350448 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:37.350455 | orchestrator | Tuesday 17 March 2026 00:44:31 +0000 (0:00:00.171) 0:00:07.521 ********* 2026-03-17 00:44:37.350461 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.350467 | orchestrator | 2026-03-17 00:44:37.350474 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:37.350480 | orchestrator | Tuesday 17 March 2026 00:44:31 +0000 (0:00:00.195) 0:00:07.717 ********* 2026-03-17 00:44:37.350487 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.350493 | orchestrator | 2026-03-17 00:44:37.350499 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:37.350506 | orchestrator | Tuesday 17 March 2026 00:44:31 +0000 (0:00:00.192) 0:00:07.910 ********* 2026-03-17 00:44:37.350512 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.350519 | orchestrator | 2026-03-17 00:44:37.350525 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-17 00:44:37.350531 | orchestrator | Tuesday 17 March 2026 00:44:32 +0000 (0:00:00.210) 0:00:08.120 ********* 2026-03-17 00:44:37.350538 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-17 00:44:37.350544 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-17 00:44:37.350551 | orchestrator | 2026-03-17 00:44:37.350557 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-17 00:44:37.350563 | orchestrator | Tuesday 17 March 2026 00:44:32 +0000 (0:00:00.177) 0:00:08.298 ********* 2026-03-17 00:44:37.350570 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.350576 | orchestrator | 2026-03-17 00:44:37.350582 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-17 00:44:37.350605 | orchestrator | Tuesday 17 March 2026 00:44:32 +0000 (0:00:00.136) 0:00:08.435 ********* 2026-03-17 00:44:37.350612 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.350618 | orchestrator | 2026-03-17 00:44:37.350624 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-17 00:44:37.350631 | orchestrator | Tuesday 17 March 2026 00:44:32 +0000 (0:00:00.115) 0:00:08.550 ********* 2026-03-17 00:44:37.350653 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.350660 | orchestrator | 2026-03-17 00:44:37.350666 | orchestrator | TASK [Define lvm_volumes structures] 
******************************************* 2026-03-17 00:44:37.350672 | orchestrator | Tuesday 17 March 2026 00:44:32 +0000 (0:00:00.126) 0:00:08.676 ********* 2026-03-17 00:44:37.350679 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:37.350685 | orchestrator | 2026-03-17 00:44:37.350692 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-17 00:44:37.350698 | orchestrator | Tuesday 17 March 2026 00:44:32 +0000 (0:00:00.122) 0:00:08.799 ********* 2026-03-17 00:44:37.350705 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b48309d9-c226-530e-bc23-6e205cf9651b'}}) 2026-03-17 00:44:37.350712 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'}}) 2026-03-17 00:44:37.350718 | orchestrator | 2026-03-17 00:44:37.350724 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-17 00:44:37.350730 | orchestrator | Tuesday 17 March 2026 00:44:32 +0000 (0:00:00.164) 0:00:08.963 ********* 2026-03-17 00:44:37.350738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b48309d9-c226-530e-bc23-6e205cf9651b'}})  2026-03-17 00:44:37.350750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'}})  2026-03-17 00:44:37.350756 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.350762 | orchestrator | 2026-03-17 00:44:37.350769 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-17 00:44:37.350775 | orchestrator | Tuesday 17 March 2026 00:44:33 +0000 (0:00:00.138) 0:00:09.102 ********* 2026-03-17 00:44:37.350781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b48309d9-c226-530e-bc23-6e205cf9651b'}})  2026-03-17 00:44:37.350788 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'}})  2026-03-17 00:44:37.350794 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.350800 | orchestrator | 2026-03-17 00:44:37.350807 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-17 00:44:37.350813 | orchestrator | Tuesday 17 March 2026 00:44:33 +0000 (0:00:00.328) 0:00:09.430 ********* 2026-03-17 00:44:37.350819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b48309d9-c226-530e-bc23-6e205cf9651b'}})  2026-03-17 00:44:37.350840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'}})  2026-03-17 00:44:37.350847 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.350854 | orchestrator | 2026-03-17 00:44:37.350862 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-17 00:44:37.350869 | orchestrator | Tuesday 17 March 2026 00:44:33 +0000 (0:00:00.139) 0:00:09.570 ********* 2026-03-17 00:44:37.350876 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:37.350883 | orchestrator | 2026-03-17 00:44:37.350903 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-17 00:44:37.350918 | orchestrator | Tuesday 17 March 2026 00:44:33 +0000 (0:00:00.160) 0:00:09.730 ********* 2026-03-17 00:44:37.350925 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:37.350931 | orchestrator | 2026-03-17 00:44:37.350942 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-17 00:44:37.350950 | orchestrator | Tuesday 17 March 2026 00:44:33 +0000 (0:00:00.141) 0:00:09.871 ********* 2026-03-17 00:44:37.350957 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.350964 | orchestrator | 
2026-03-17 00:44:37.350971 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-17 00:44:37.350977 | orchestrator | Tuesday 17 March 2026 00:44:33 +0000 (0:00:00.132) 0:00:10.004 ********* 2026-03-17 00:44:37.350989 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.350996 | orchestrator | 2026-03-17 00:44:37.351017 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-17 00:44:37.351025 | orchestrator | Tuesday 17 March 2026 00:44:34 +0000 (0:00:00.137) 0:00:10.142 ********* 2026-03-17 00:44:37.351032 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.351038 | orchestrator | 2026-03-17 00:44:37.351045 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-17 00:44:37.351052 | orchestrator | Tuesday 17 March 2026 00:44:34 +0000 (0:00:00.145) 0:00:10.287 ********* 2026-03-17 00:44:37.351059 | orchestrator | ok: [testbed-node-3] => { 2026-03-17 00:44:37.351066 | orchestrator |  "ceph_osd_devices": { 2026-03-17 00:44:37.351073 | orchestrator |  "sdb": { 2026-03-17 00:44:37.351080 | orchestrator |  "osd_lvm_uuid": "b48309d9-c226-530e-bc23-6e205cf9651b" 2026-03-17 00:44:37.351087 | orchestrator |  }, 2026-03-17 00:44:37.351094 | orchestrator |  "sdc": { 2026-03-17 00:44:37.351101 | orchestrator |  "osd_lvm_uuid": "6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f" 2026-03-17 00:44:37.351107 | orchestrator |  } 2026-03-17 00:44:37.351114 | orchestrator |  } 2026-03-17 00:44:37.351121 | orchestrator | } 2026-03-17 00:44:37.351128 | orchestrator | 2026-03-17 00:44:37.351135 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-17 00:44:37.351141 | orchestrator | Tuesday 17 March 2026 00:44:34 +0000 (0:00:00.134) 0:00:10.421 ********* 2026-03-17 00:44:37.351148 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.351155 | orchestrator | 
2026-03-17 00:44:37.351162 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-17 00:44:37.351169 | orchestrator | Tuesday 17 March 2026 00:44:34 +0000 (0:00:00.136) 0:00:10.557 ********* 2026-03-17 00:44:37.351175 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.351182 | orchestrator | 2026-03-17 00:44:37.351189 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-17 00:44:37.351196 | orchestrator | Tuesday 17 March 2026 00:44:34 +0000 (0:00:00.133) 0:00:10.691 ********* 2026-03-17 00:44:37.351203 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:44:37.351210 | orchestrator | 2026-03-17 00:44:37.351217 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-17 00:44:37.351224 | orchestrator | Tuesday 17 March 2026 00:44:34 +0000 (0:00:00.117) 0:00:10.809 ********* 2026-03-17 00:44:37.351231 | orchestrator | changed: [testbed-node-3] => { 2026-03-17 00:44:37.351237 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-17 00:44:37.351243 | orchestrator |  "ceph_osd_devices": { 2026-03-17 00:44:37.351249 | orchestrator |  "sdb": { 2026-03-17 00:44:37.351256 | orchestrator |  "osd_lvm_uuid": "b48309d9-c226-530e-bc23-6e205cf9651b" 2026-03-17 00:44:37.351262 | orchestrator |  }, 2026-03-17 00:44:37.351268 | orchestrator |  "sdc": { 2026-03-17 00:44:37.351274 | orchestrator |  "osd_lvm_uuid": "6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f" 2026-03-17 00:44:37.351281 | orchestrator |  } 2026-03-17 00:44:37.351287 | orchestrator |  }, 2026-03-17 00:44:37.351293 | orchestrator |  "lvm_volumes": [ 2026-03-17 00:44:37.351299 | orchestrator |  { 2026-03-17 00:44:37.351305 | orchestrator |  "data": "osd-block-b48309d9-c226-530e-bc23-6e205cf9651b", 2026-03-17 00:44:37.351311 | orchestrator |  "data_vg": "ceph-b48309d9-c226-530e-bc23-6e205cf9651b" 2026-03-17 00:44:37.351318 | orchestrator |  }, 
2026-03-17 00:44:37.351324 | orchestrator |  { 2026-03-17 00:44:37.351330 | orchestrator |  "data": "osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f", 2026-03-17 00:44:37.351336 | orchestrator |  "data_vg": "ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f" 2026-03-17 00:44:37.351342 | orchestrator |  } 2026-03-17 00:44:37.351348 | orchestrator |  ] 2026-03-17 00:44:37.351354 | orchestrator |  } 2026-03-17 00:44:37.351361 | orchestrator | } 2026-03-17 00:44:37.351371 | orchestrator | 2026-03-17 00:44:37.351377 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-17 00:44:37.351384 | orchestrator | Tuesday 17 March 2026 00:44:35 +0000 (0:00:00.392) 0:00:11.202 ********* 2026-03-17 00:44:37.351390 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-17 00:44:37.351396 | orchestrator | 2026-03-17 00:44:37.351406 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-17 00:44:37.351412 | orchestrator | 2026-03-17 00:44:37.351418 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-17 00:44:37.351424 | orchestrator | Tuesday 17 March 2026 00:44:36 +0000 (0:00:01.758) 0:00:12.960 ********* 2026-03-17 00:44:37.351430 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-17 00:44:37.351436 | orchestrator | 2026-03-17 00:44:37.351442 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-17 00:44:37.351449 | orchestrator | Tuesday 17 March 2026 00:44:37 +0000 (0:00:00.229) 0:00:13.189 ********* 2026-03-17 00:44:37.351455 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:44:37.351461 | orchestrator | 2026-03-17 00:44:37.351471 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.370743 | orchestrator | Tuesday 17 March 2026 00:44:37 +0000 (0:00:00.231) 
0:00:13.421 ********* 2026-03-17 00:44:44.370837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-17 00:44:44.370850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-17 00:44:44.370859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-17 00:44:44.370867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-17 00:44:44.370876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-17 00:44:44.370884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-17 00:44:44.370893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-17 00:44:44.370901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-17 00:44:44.370909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-17 00:44:44.370917 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-17 00:44:44.370925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-17 00:44:44.370933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-17 00:44:44.370944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-17 00:44:44.370953 | orchestrator | 2026-03-17 00:44:44.370963 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.370971 | orchestrator | Tuesday 17 March 2026 00:44:37 +0000 (0:00:00.353) 0:00:13.774 ********* 2026-03-17 00:44:44.370979 | orchestrator | skipping: 
[testbed-node-4] 2026-03-17 00:44:44.370988 | orchestrator | 2026-03-17 00:44:44.371066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.371075 | orchestrator | Tuesday 17 March 2026 00:44:37 +0000 (0:00:00.208) 0:00:13.983 ********* 2026-03-17 00:44:44.371083 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.371091 | orchestrator | 2026-03-17 00:44:44.371098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.371106 | orchestrator | Tuesday 17 March 2026 00:44:38 +0000 (0:00:00.190) 0:00:14.174 ********* 2026-03-17 00:44:44.371114 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.371122 | orchestrator | 2026-03-17 00:44:44.371130 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.371138 | orchestrator | Tuesday 17 March 2026 00:44:38 +0000 (0:00:00.188) 0:00:14.362 ********* 2026-03-17 00:44:44.371166 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.371174 | orchestrator | 2026-03-17 00:44:44.371182 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.371190 | orchestrator | Tuesday 17 March 2026 00:44:38 +0000 (0:00:00.164) 0:00:14.527 ********* 2026-03-17 00:44:44.371198 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.371206 | orchestrator | 2026-03-17 00:44:44.371214 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.371221 | orchestrator | Tuesday 17 March 2026 00:44:39 +0000 (0:00:00.627) 0:00:15.155 ********* 2026-03-17 00:44:44.371229 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.371237 | orchestrator | 2026-03-17 00:44:44.371245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.371252 | 
orchestrator | Tuesday 17 March 2026 00:44:39 +0000 (0:00:00.198) 0:00:15.353 ********* 2026-03-17 00:44:44.371260 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.371268 | orchestrator | 2026-03-17 00:44:44.371276 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.371284 | orchestrator | Tuesday 17 March 2026 00:44:39 +0000 (0:00:00.224) 0:00:15.578 ********* 2026-03-17 00:44:44.371292 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.371301 | orchestrator | 2026-03-17 00:44:44.371326 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.371335 | orchestrator | Tuesday 17 March 2026 00:44:39 +0000 (0:00:00.157) 0:00:15.736 ********* 2026-03-17 00:44:44.371343 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35) 2026-03-17 00:44:44.371353 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35) 2026-03-17 00:44:44.371362 | orchestrator | 2026-03-17 00:44:44.371371 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.371380 | orchestrator | Tuesday 17 March 2026 00:44:40 +0000 (0:00:00.360) 0:00:16.097 ********* 2026-03-17 00:44:44.371389 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9ec754d5-296d-4a8a-b6d8-e4830272a171) 2026-03-17 00:44:44.371398 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9ec754d5-296d-4a8a-b6d8-e4830272a171) 2026-03-17 00:44:44.371407 | orchestrator | 2026-03-17 00:44:44.371416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.371424 | orchestrator | Tuesday 17 March 2026 00:44:40 +0000 (0:00:00.377) 0:00:16.475 ********* 2026-03-17 00:44:44.371433 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_d8ebe49d-b73b-4490-897b-f13bdc67f86d) 2026-03-17 00:44:44.371442 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d8ebe49d-b73b-4490-897b-f13bdc67f86d) 2026-03-17 00:44:44.371451 | orchestrator | 2026-03-17 00:44:44.371460 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.371482 | orchestrator | Tuesday 17 March 2026 00:44:40 +0000 (0:00:00.399) 0:00:16.874 ********* 2026-03-17 00:44:44.371491 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f91ef76e-9f0f-49ef-bc09-7b70daad6579) 2026-03-17 00:44:44.371500 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f91ef76e-9f0f-49ef-bc09-7b70daad6579) 2026-03-17 00:44:44.371509 | orchestrator | 2026-03-17 00:44:44.371519 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:44:44.371528 | orchestrator | Tuesday 17 March 2026 00:44:41 +0000 (0:00:00.385) 0:00:17.259 ********* 2026-03-17 00:44:44.371537 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-17 00:44:44.371546 | orchestrator | 2026-03-17 00:44:44.371555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:44.371564 | orchestrator | Tuesday 17 March 2026 00:44:41 +0000 (0:00:00.286) 0:00:17.545 ********* 2026-03-17 00:44:44.371573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-17 00:44:44.371588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-17 00:44:44.371597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-17 00:44:44.371606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-17 00:44:44.371614 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-17 00:44:44.371623 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-17 00:44:44.371632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-17 00:44:44.371641 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-17 00:44:44.371650 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-17 00:44:44.371659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-17 00:44:44.371668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-17 00:44:44.371676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-17 00:44:44.371686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-17 00:44:44.371699 | orchestrator | 2026-03-17 00:44:44.371713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:44.371725 | orchestrator | Tuesday 17 March 2026 00:44:41 +0000 (0:00:00.329) 0:00:17.875 ********* 2026-03-17 00:44:44.371738 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.371751 | orchestrator | 2026-03-17 00:44:44.371764 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:44.371776 | orchestrator | Tuesday 17 March 2026 00:44:42 +0000 (0:00:00.502) 0:00:18.377 ********* 2026-03-17 00:44:44.371789 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.371802 | orchestrator | 2026-03-17 00:44:44.371810 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-03-17 00:44:44.371818 | orchestrator | Tuesday 17 March 2026 00:44:42 +0000 (0:00:00.175) 0:00:18.553 ********* 2026-03-17 00:44:44.371826 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.371834 | orchestrator | 2026-03-17 00:44:44.371841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:44.371849 | orchestrator | Tuesday 17 March 2026 00:44:42 +0000 (0:00:00.154) 0:00:18.708 ********* 2026-03-17 00:44:44.371862 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.371875 | orchestrator | 2026-03-17 00:44:44.371887 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:44.371922 | orchestrator | Tuesday 17 March 2026 00:44:42 +0000 (0:00:00.184) 0:00:18.892 ********* 2026-03-17 00:44:44.371934 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.371948 | orchestrator | 2026-03-17 00:44:44.371962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:44.371974 | orchestrator | Tuesday 17 March 2026 00:44:42 +0000 (0:00:00.151) 0:00:19.044 ********* 2026-03-17 00:44:44.371986 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.372018 | orchestrator | 2026-03-17 00:44:44.372028 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:44.372036 | orchestrator | Tuesday 17 March 2026 00:44:43 +0000 (0:00:00.181) 0:00:19.225 ********* 2026-03-17 00:44:44.372044 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:44.372051 | orchestrator | 2026-03-17 00:44:44.372059 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:44.372067 | orchestrator | Tuesday 17 March 2026 00:44:43 +0000 (0:00:00.163) 0:00:19.388 ********* 2026-03-17 00:44:44.372075 | orchestrator | skipping: [testbed-node-4] 
2026-03-17 00:44:44.372090 | orchestrator | 2026-03-17 00:44:44.372098 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:44.372106 | orchestrator | Tuesday 17 March 2026 00:44:43 +0000 (0:00:00.173) 0:00:19.561 ********* 2026-03-17 00:44:44.372114 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-17 00:44:44.372123 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-17 00:44:44.372131 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-17 00:44:44.372139 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-17 00:44:44.372146 | orchestrator | 2026-03-17 00:44:44.372154 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:44.372162 | orchestrator | Tuesday 17 March 2026 00:44:44 +0000 (0:00:00.707) 0:00:20.269 ********* 2026-03-17 00:44:44.372170 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.666684 | orchestrator | 2026-03-17 00:44:49.666809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:49.666828 | orchestrator | Tuesday 17 March 2026 00:44:44 +0000 (0:00:00.175) 0:00:20.445 ********* 2026-03-17 00:44:49.666841 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.666853 | orchestrator | 2026-03-17 00:44:49.666865 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:49.666876 | orchestrator | Tuesday 17 March 2026 00:44:44 +0000 (0:00:00.174) 0:00:20.619 ********* 2026-03-17 00:44:49.666887 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.666898 | orchestrator | 2026-03-17 00:44:49.666909 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:44:49.666920 | orchestrator | Tuesday 17 March 2026 00:44:44 +0000 (0:00:00.172) 0:00:20.791 ********* 2026-03-17 00:44:49.666931 | 
orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.666941 | orchestrator | 2026-03-17 00:44:49.666952 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-17 00:44:49.666963 | orchestrator | Tuesday 17 March 2026 00:44:45 +0000 (0:00:00.509) 0:00:21.301 ********* 2026-03-17 00:44:49.666974 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-17 00:44:49.667037 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-17 00:44:49.667053 | orchestrator | 2026-03-17 00:44:49.667064 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-17 00:44:49.667074 | orchestrator | Tuesday 17 March 2026 00:44:45 +0000 (0:00:00.147) 0:00:21.449 ********* 2026-03-17 00:44:49.667085 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.667108 | orchestrator | 2026-03-17 00:44:49.667120 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-17 00:44:49.667131 | orchestrator | Tuesday 17 March 2026 00:44:45 +0000 (0:00:00.117) 0:00:21.567 ********* 2026-03-17 00:44:49.667142 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.667153 | orchestrator | 2026-03-17 00:44:49.667164 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-17 00:44:49.667174 | orchestrator | Tuesday 17 March 2026 00:44:45 +0000 (0:00:00.109) 0:00:21.676 ********* 2026-03-17 00:44:49.667185 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.667196 | orchestrator | 2026-03-17 00:44:49.667207 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-17 00:44:49.667220 | orchestrator | Tuesday 17 March 2026 00:44:45 +0000 (0:00:00.104) 0:00:21.780 ********* 2026-03-17 00:44:49.667232 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:44:49.667246 | 
orchestrator | 2026-03-17 00:44:49.667258 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-17 00:44:49.667270 | orchestrator | Tuesday 17 March 2026 00:44:45 +0000 (0:00:00.114) 0:00:21.894 ********* 2026-03-17 00:44:49.667283 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13f697f5-12ba-5526-98d1-b1a9c265f800'}}) 2026-03-17 00:44:49.667297 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0cc3c10-edeb-5a7b-849a-4273befffbf6'}}) 2026-03-17 00:44:49.667340 | orchestrator | 2026-03-17 00:44:49.667358 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-17 00:44:49.667376 | orchestrator | Tuesday 17 March 2026 00:44:45 +0000 (0:00:00.130) 0:00:22.024 ********* 2026-03-17 00:44:49.667395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13f697f5-12ba-5526-98d1-b1a9c265f800'}})  2026-03-17 00:44:49.667417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0cc3c10-edeb-5a7b-849a-4273befffbf6'}})  2026-03-17 00:44:49.667436 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.667458 | orchestrator | 2026-03-17 00:44:49.667479 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-17 00:44:49.667500 | orchestrator | Tuesday 17 March 2026 00:44:46 +0000 (0:00:00.121) 0:00:22.146 ********* 2026-03-17 00:44:49.667522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13f697f5-12ba-5526-98d1-b1a9c265f800'}})  2026-03-17 00:44:49.667567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0cc3c10-edeb-5a7b-849a-4273befffbf6'}})  2026-03-17 00:44:49.667589 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.667611 | orchestrator | 2026-03-17 
00:44:49.667632 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-17 00:44:49.667653 | orchestrator | Tuesday 17 March 2026 00:44:46 +0000 (0:00:00.136) 0:00:22.282 ********* 2026-03-17 00:44:49.667675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13f697f5-12ba-5526-98d1-b1a9c265f800'}})  2026-03-17 00:44:49.667698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0cc3c10-edeb-5a7b-849a-4273befffbf6'}})  2026-03-17 00:44:49.667721 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.667741 | orchestrator | 2026-03-17 00:44:49.667763 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-17 00:44:49.667784 | orchestrator | Tuesday 17 March 2026 00:44:46 +0000 (0:00:00.130) 0:00:22.412 ********* 2026-03-17 00:44:49.667807 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:44:49.667827 | orchestrator | 2026-03-17 00:44:49.667846 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-17 00:44:49.667866 | orchestrator | Tuesday 17 March 2026 00:44:46 +0000 (0:00:00.119) 0:00:22.532 ********* 2026-03-17 00:44:49.667887 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:44:49.667898 | orchestrator | 2026-03-17 00:44:49.667909 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-17 00:44:49.667919 | orchestrator | Tuesday 17 March 2026 00:44:46 +0000 (0:00:00.123) 0:00:22.655 ********* 2026-03-17 00:44:49.667952 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.667963 | orchestrator | 2026-03-17 00:44:49.667974 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-17 00:44:49.668012 | orchestrator | Tuesday 17 March 2026 00:44:46 +0000 (0:00:00.247) 0:00:22.903 ********* 2026-03-17 
00:44:49.668027 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.668038 | orchestrator | 2026-03-17 00:44:49.668049 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-17 00:44:49.668060 | orchestrator | Tuesday 17 March 2026 00:44:46 +0000 (0:00:00.127) 0:00:23.030 ********* 2026-03-17 00:44:49.668071 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.668082 | orchestrator | 2026-03-17 00:44:49.668092 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-17 00:44:49.668103 | orchestrator | Tuesday 17 March 2026 00:44:47 +0000 (0:00:00.116) 0:00:23.147 ********* 2026-03-17 00:44:49.668114 | orchestrator | ok: [testbed-node-4] => { 2026-03-17 00:44:49.668125 | orchestrator |  "ceph_osd_devices": { 2026-03-17 00:44:49.668136 | orchestrator |  "sdb": { 2026-03-17 00:44:49.668148 | orchestrator |  "osd_lvm_uuid": "13f697f5-12ba-5526-98d1-b1a9c265f800" 2026-03-17 00:44:49.668158 | orchestrator |  }, 2026-03-17 00:44:49.668182 | orchestrator |  "sdc": { 2026-03-17 00:44:49.668193 | orchestrator |  "osd_lvm_uuid": "a0cc3c10-edeb-5a7b-849a-4273befffbf6" 2026-03-17 00:44:49.668204 | orchestrator |  } 2026-03-17 00:44:49.668215 | orchestrator |  } 2026-03-17 00:44:49.668226 | orchestrator | } 2026-03-17 00:44:49.668237 | orchestrator | 2026-03-17 00:44:49.668248 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-17 00:44:49.668259 | orchestrator | Tuesday 17 March 2026 00:44:47 +0000 (0:00:00.113) 0:00:23.261 ********* 2026-03-17 00:44:49.668269 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:44:49.668280 | orchestrator | 2026-03-17 00:44:49.668291 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-17 00:44:49.668301 | orchestrator | Tuesday 17 March 2026 00:44:47 +0000 (0:00:00.109) 0:00:23.370 ********* 2026-03-17 
00:44:49.668312 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:44:49.668323 | orchestrator |
2026-03-17 00:44:49.668334 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-17 00:44:49.668344 | orchestrator | Tuesday 17 March 2026 00:44:47 +0000 (0:00:00.107) 0:00:23.478 *********
2026-03-17 00:44:49.668355 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:44:49.668366 | orchestrator |
2026-03-17 00:44:49.668377 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-17 00:44:49.668388 | orchestrator | Tuesday 17 March 2026 00:44:47 +0000 (0:00:00.128) 0:00:23.606 *********
2026-03-17 00:44:49.668398 | orchestrator | changed: [testbed-node-4] => {
2026-03-17 00:44:49.668409 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-03-17 00:44:49.668420 | orchestrator |  "ceph_osd_devices": {
2026-03-17 00:44:49.668431 | orchestrator |  "sdb": {
2026-03-17 00:44:49.668442 | orchestrator |  "osd_lvm_uuid": "13f697f5-12ba-5526-98d1-b1a9c265f800"
2026-03-17 00:44:49.668453 | orchestrator |  },
2026-03-17 00:44:49.668464 | orchestrator |  "sdc": {
2026-03-17 00:44:49.668475 | orchestrator |  "osd_lvm_uuid": "a0cc3c10-edeb-5a7b-849a-4273befffbf6"
2026-03-17 00:44:49.668485 | orchestrator |  }
2026-03-17 00:44:49.668496 | orchestrator |  },
2026-03-17 00:44:49.668507 | orchestrator |  "lvm_volumes": [
2026-03-17 00:44:49.668518 | orchestrator |  {
2026-03-17 00:44:49.668529 | orchestrator |  "data": "osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800",
2026-03-17 00:44:49.668540 | orchestrator |  "data_vg": "ceph-13f697f5-12ba-5526-98d1-b1a9c265f800"
2026-03-17 00:44:49.668551 | orchestrator |  },
2026-03-17 00:44:49.668561 | orchestrator |  {
2026-03-17 00:44:49.668572 | orchestrator |  "data": "osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6",
2026-03-17 00:44:49.668583 | orchestrator |  "data_vg": "ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6"
2026-03-17 00:44:49.668594 | orchestrator |  }
2026-03-17 00:44:49.668604 | orchestrator |  ]
2026-03-17 00:44:49.668615 | orchestrator |  }
2026-03-17 00:44:49.668626 | orchestrator | }
2026-03-17 00:44:49.668636 | orchestrator |
2026-03-17 00:44:49.668647 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-17 00:44:49.668658 | orchestrator | Tuesday 17 March 2026 00:44:47 +0000 (0:00:00.222) 0:00:23.829 *********
2026-03-17 00:44:49.668669 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-17 00:44:49.668680 | orchestrator |
2026-03-17 00:44:49.668691 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-17 00:44:49.668701 | orchestrator |
2026-03-17 00:44:49.668712 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-17 00:44:49.668723 | orchestrator | Tuesday 17 March 2026 00:44:48 +0000 (0:00:00.888) 0:00:24.718 *********
2026-03-17 00:44:49.668733 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-17 00:44:49.668744 | orchestrator |
2026-03-17 00:44:49.668755 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-17 00:44:49.668773 | orchestrator | Tuesday 17 March 2026 00:44:49 +0000 (0:00:00.505) 0:00:25.223 *********
2026-03-17 00:44:49.668784 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:44:49.668794 | orchestrator |
2026-03-17 00:44:49.668805 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:49.668816 | orchestrator | Tuesday 17 March 2026 00:44:49 +0000 (0:00:00.228) 0:00:25.451 *********
2026-03-17 00:44:49.668827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-17 00:44:49.668838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-17 00:44:49.668856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-17 00:44:49.668868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-17 00:44:49.668878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-17 00:44:49.668897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-17 00:44:56.635822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-17 00:44:56.635909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-17 00:44:56.635919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-17 00:44:56.635927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-17 00:44:56.635934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-17 00:44:56.635941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-17 00:44:56.635948 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-17 00:44:56.635954 | orchestrator |
2026-03-17 00:44:56.635962 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:56.635970 | orchestrator | Tuesday 17 March 2026 00:44:49 +0000 (0:00:00.286) 0:00:25.738 *********
2026-03-17 00:44:56.636051 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636061 | orchestrator |
2026-03-17 00:44:56.636069 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:56.636076 | orchestrator | Tuesday 17 March 2026 00:44:49 +0000 (0:00:00.176) 0:00:25.915 *********
2026-03-17 00:44:56.636083 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636089 | orchestrator |
2026-03-17 00:44:56.636096 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:56.636103 | orchestrator | Tuesday 17 March 2026 00:44:49 +0000 (0:00:00.161) 0:00:26.076 *********
2026-03-17 00:44:56.636109 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636116 | orchestrator |
2026-03-17 00:44:56.636123 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:56.636130 | orchestrator | Tuesday 17 March 2026 00:44:50 +0000 (0:00:00.169) 0:00:26.246 *********
2026-03-17 00:44:56.636136 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636143 | orchestrator |
2026-03-17 00:44:56.636150 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:56.636157 | orchestrator | Tuesday 17 March 2026 00:44:50 +0000 (0:00:00.155) 0:00:26.401 *********
2026-03-17 00:44:56.636163 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636170 | orchestrator |
2026-03-17 00:44:56.636176 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:56.636183 | orchestrator | Tuesday 17 March 2026 00:44:50 +0000 (0:00:00.184) 0:00:26.586 *********
2026-03-17 00:44:56.636190 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636196 | orchestrator |
2026-03-17 00:44:56.636203 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:56.636210 | orchestrator | Tuesday 17 March 2026 00:44:50 +0000 (0:00:00.227) 0:00:26.813 *********
2026-03-17 00:44:56.636234 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636241 | orchestrator |
2026-03-17 00:44:56.636248 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:56.636254 | orchestrator | Tuesday 17 March 2026 00:44:50 +0000 (0:00:00.172) 0:00:26.986 *********
2026-03-17 00:44:56.636261 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636267 | orchestrator |
2026-03-17 00:44:56.636274 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:56.636281 | orchestrator | Tuesday 17 March 2026 00:44:51 +0000 (0:00:00.169) 0:00:27.155 *********
2026-03-17 00:44:56.636288 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f)
2026-03-17 00:44:56.636295 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f)
2026-03-17 00:44:56.636302 | orchestrator |
2026-03-17 00:44:56.636308 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:56.636315 | orchestrator | Tuesday 17 March 2026 00:44:51 +0000 (0:00:00.632) 0:00:27.788 *********
2026-03-17 00:44:56.636322 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a7deaf5a-cd70-43cd-92ab-ee3441c5e54f)
2026-03-17 00:44:56.636328 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a7deaf5a-cd70-43cd-92ab-ee3441c5e54f)
2026-03-17 00:44:56.636335 | orchestrator |
2026-03-17 00:44:56.636341 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:56.636348 | orchestrator | Tuesday 17 March 2026 00:44:52 +0000 (0:00:00.362) 0:00:28.151 *********
2026-03-17 00:44:56.636354 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dd7becb9-0584-4efc-8944-d51272ed61fa)
2026-03-17 00:44:56.636361 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dd7becb9-0584-4efc-8944-d51272ed61fa)
2026-03-17 00:44:56.636367 | orchestrator |
2026-03-17 00:44:56.636374 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:56.636382 | orchestrator | Tuesday 17 March 2026 00:44:52 +0000 (0:00:00.409) 0:00:28.561 *********
2026-03-17 00:44:56.636390 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0a90ba68-315a-4ce4-a803-8ffceb4dacc1)
2026-03-17 00:44:56.636402 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0a90ba68-315a-4ce4-a803-8ffceb4dacc1)
2026-03-17 00:44:56.636414 | orchestrator |
2026-03-17 00:44:56.636426 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:44:56.636439 | orchestrator | Tuesday 17 March 2026 00:44:52 +0000 (0:00:00.374) 0:00:28.935 *********
2026-03-17 00:44:56.636449 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-17 00:44:56.636461 | orchestrator |
2026-03-17 00:44:56.636473 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.636500 | orchestrator | Tuesday 17 March 2026 00:44:53 +0000 (0:00:00.297) 0:00:29.232 *********
2026-03-17 00:44:56.636513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-17 00:44:56.636526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-17 00:44:56.636540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-17 00:44:56.636553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-17 00:44:56.636564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-17 00:44:56.636571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-17 00:44:56.636579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-17 00:44:56.636587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-17 00:44:56.636602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-17 00:44:56.636610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-17 00:44:56.636617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-17 00:44:56.636638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-17 00:44:56.636645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-17 00:44:56.636652 | orchestrator |
2026-03-17 00:44:56.636660 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.636667 | orchestrator | Tuesday 17 March 2026 00:44:53 +0000 (0:00:00.316) 0:00:29.549 *********
2026-03-17 00:44:56.636675 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636682 | orchestrator |
2026-03-17 00:44:56.636690 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.636698 | orchestrator | Tuesday 17 March 2026 00:44:53 +0000 (0:00:00.231) 0:00:29.781 *********
2026-03-17 00:44:56.636705 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636713 | orchestrator |
2026-03-17 00:44:56.636720 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.636726 | orchestrator | Tuesday 17 March 2026 00:44:53 +0000 (0:00:00.199) 0:00:29.981 *********
2026-03-17 00:44:56.636736 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636742 | orchestrator |
2026-03-17 00:44:56.636749 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.636756 | orchestrator | Tuesday 17 March 2026 00:44:54 +0000 (0:00:00.156) 0:00:30.137 *********
2026-03-17 00:44:56.636762 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636769 | orchestrator |
2026-03-17 00:44:56.636775 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.636782 | orchestrator | Tuesday 17 March 2026 00:44:54 +0000 (0:00:00.153) 0:00:30.290 *********
2026-03-17 00:44:56.636788 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636795 | orchestrator |
2026-03-17 00:44:56.636801 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.636808 | orchestrator | Tuesday 17 March 2026 00:44:54 +0000 (0:00:00.152) 0:00:30.443 *********
2026-03-17 00:44:56.636815 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636821 | orchestrator |
2026-03-17 00:44:56.636828 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.636834 | orchestrator | Tuesday 17 March 2026 00:44:54 +0000 (0:00:00.530) 0:00:30.973 *********
2026-03-17 00:44:56.636841 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636847 | orchestrator |
2026-03-17 00:44:56.636854 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.636860 | orchestrator | Tuesday 17 March 2026 00:44:55 +0000 (0:00:00.177) 0:00:31.151 *********
2026-03-17 00:44:56.636867 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636873 | orchestrator |
2026-03-17 00:44:56.636880 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.636886 | orchestrator | Tuesday 17 March 2026 00:44:55 +0000 (0:00:00.168) 0:00:31.319 *********
2026-03-17 00:44:56.636893 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-17 00:44:56.636900 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-17 00:44:56.636907 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-17 00:44:56.636913 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-17 00:44:56.636920 | orchestrator |
2026-03-17 00:44:56.636926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.636933 | orchestrator | Tuesday 17 March 2026 00:44:55 +0000 (0:00:00.590) 0:00:31.909 *********
2026-03-17 00:44:56.636939 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636946 | orchestrator |
2026-03-17 00:44:56.636958 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.636965 | orchestrator | Tuesday 17 March 2026 00:44:56 +0000 (0:00:00.205) 0:00:32.115 *********
2026-03-17 00:44:56.636971 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.636995 | orchestrator |
2026-03-17 00:44:56.637002 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.637009 | orchestrator | Tuesday 17 March 2026 00:44:56 +0000 (0:00:00.215) 0:00:32.330 *********
2026-03-17 00:44:56.637016 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.637022 | orchestrator |
2026-03-17 00:44:56.637029 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:44:56.637035 | orchestrator | Tuesday 17 March 2026 00:44:56 +0000 (0:00:00.212) 0:00:32.543 *********
2026-03-17 00:44:56.637042 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:56.637048 | orchestrator |
2026-03-17 00:44:56.637061 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-17 00:45:00.101464 | orchestrator | Tuesday 17 March 2026 00:44:56 +0000 (0:00:00.162) 0:00:32.706 *********
2026-03-17 00:45:00.101575 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-17 00:45:00.101594 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-17 00:45:00.101607 | orchestrator |
2026-03-17 00:45:00.101620 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-17 00:45:00.101631 | orchestrator | Tuesday 17 March 2026 00:44:56 +0000 (0:00:00.135) 0:00:32.841 *********
2026-03-17 00:45:00.101643 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:45:00.101654 | orchestrator |
2026-03-17 00:45:00.101665 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-17 00:45:00.101676 | orchestrator | Tuesday 17 March 2026 00:44:56 +0000 (0:00:00.103) 0:00:32.944 *********
2026-03-17 00:45:00.101686 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:45:00.101697 | orchestrator |
2026-03-17 00:45:00.101708 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-17 00:45:00.101719 | orchestrator | Tuesday 17 March 2026 00:44:56 +0000 (0:00:00.102) 0:00:33.047 *********
2026-03-17 00:45:00.101729 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:45:00.101740 | orchestrator |
2026-03-17 00:45:00.101750 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-17 00:45:00.101761 | orchestrator | Tuesday 17 March 2026 00:44:57 +0000 (0:00:00.242) 0:00:33.289 *********
2026-03-17 00:45:00.101772 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:45:00.101783 | orchestrator |
2026-03-17 00:45:00.101794 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-17 00:45:00.101806 | orchestrator | Tuesday 17 March 2026 00:44:57 +0000 (0:00:00.117) 0:00:33.407 *********
2026-03-17 00:45:00.101817 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'}})
2026-03-17 00:45:00.101828 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bc85b6b7-69fe-55db-81a6-3a78775dfc6c'}})
2026-03-17 00:45:00.101839 | orchestrator |
2026-03-17 00:45:00.101850 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-17 00:45:00.101860 | orchestrator | Tuesday 17 March 2026 00:44:57 +0000 (0:00:00.138) 0:00:33.545 *********
2026-03-17 00:45:00.101872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'}})
2026-03-17 00:45:00.101884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bc85b6b7-69fe-55db-81a6-3a78775dfc6c'}})
2026-03-17 00:45:00.101895 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:45:00.101905 | orchestrator |
2026-03-17 00:45:00.101916 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-17 00:45:00.101927 | orchestrator | Tuesday 17 March 2026 00:44:57 +0000 (0:00:00.110) 0:00:33.655 *********
2026-03-17 00:45:00.101938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'}})
2026-03-17 00:45:00.102117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bc85b6b7-69fe-55db-81a6-3a78775dfc6c'}})
2026-03-17 00:45:00.102138 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:45:00.102150 | orchestrator |
2026-03-17 00:45:00.102163 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-17 00:45:00.102175 | orchestrator | Tuesday 17 March 2026 00:44:57 +0000 (0:00:00.130) 0:00:33.786 *********
2026-03-17 00:45:00.102188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'}})
2026-03-17 00:45:00.102201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bc85b6b7-69fe-55db-81a6-3a78775dfc6c'}})
2026-03-17 00:45:00.102214 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:45:00.102226 | orchestrator |
2026-03-17 00:45:00.102238 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-17 00:45:00.102250 | orchestrator | Tuesday 17 March 2026 00:44:57 +0000 (0:00:00.124) 0:00:33.911 *********
2026-03-17 00:45:00.102263 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:45:00.102275 | orchestrator |
2026-03-17 00:45:00.102287 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-17 00:45:00.102300 | orchestrator | Tuesday 17 March 2026 00:44:57 +0000 (0:00:00.128) 0:00:34.040 *********
2026-03-17 00:45:00.102312 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:45:00.102325 | orchestrator |
2026-03-17 00:45:00.102355 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-17 00:45:00.102369 | orchestrator | Tuesday 17 March 2026 00:44:58 +0000 (0:00:00.114) 0:00:34.154 *********
2026-03-17 00:45:00.102382 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:45:00.102394 | orchestrator |
2026-03-17 00:45:00.102405 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-17 00:45:00.102415 | orchestrator | Tuesday 17 March 2026 00:44:58 +0000 (0:00:00.098) 0:00:34.253 *********
2026-03-17 00:45:00.102426 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:45:00.102436 | orchestrator |
2026-03-17 00:45:00.102447 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-17 00:45:00.102458 | orchestrator | Tuesday 17 March 2026 00:44:58 +0000 (0:00:00.174) 0:00:34.427 *********
2026-03-17 00:45:00.102472 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:45:00.102491 | orchestrator |
2026-03-17 00:45:00.102509 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-17 00:45:00.102521 | orchestrator | Tuesday 17 March 2026 00:44:58 +0000 (0:00:00.120) 0:00:34.548 *********
2026-03-17 00:45:00.102531 | orchestrator | ok: [testbed-node-5] => {
2026-03-17 00:45:00.102542 | orchestrator |  "ceph_osd_devices": {
2026-03-17 00:45:00.102553 | orchestrator |  "sdb": {
2026-03-17 00:45:00.102584 | orchestrator |  "osd_lvm_uuid": "6d2c3af9-2510-58af-8cf3-0edda6a2b7a0"
2026-03-17 00:45:00.102596 | orchestrator |  },
2026-03-17 00:45:00.102607 | orchestrator |  "sdc": {
2026-03-17 00:45:00.102618 | orchestrator |  "osd_lvm_uuid": "bc85b6b7-69fe-55db-81a6-3a78775dfc6c"
2026-03-17 00:45:00.102628 | orchestrator |  }
2026-03-17 00:45:00.102639 | orchestrator |  }
2026-03-17 00:45:00.102650 | orchestrator | }
2026-03-17 00:45:00.102661 | orchestrator |
2026-03-17 00:45:00.102672 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-17 00:45:00.102682 | orchestrator | Tuesday 17 March 2026 00:44:58 +0000 (0:00:00.141) 0:00:34.690 *********
2026-03-17 00:45:00.102693 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:45:00.102704 | orchestrator |
2026-03-17 00:45:00.102714 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-17 00:45:00.102725 | orchestrator | Tuesday 17 March 2026 00:44:58 +0000 (0:00:00.251) 0:00:34.941 *********
2026-03-17 00:45:00.102735 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:45:00.102757 | orchestrator |
2026-03-17 00:45:00.102767 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-17 00:45:00.102778 | orchestrator | Tuesday 17 March 2026 00:44:58 +0000 (0:00:00.102) 0:00:35.043 *********
2026-03-17 00:45:00.102788 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:45:00.102799 | orchestrator |
2026-03-17 00:45:00.102809 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-17 00:45:00.102820 | orchestrator | Tuesday 17 March 2026 00:44:59 +0000 (0:00:00.119) 0:00:35.163 *********
2026-03-17 00:45:00.102830 | orchestrator | changed: [testbed-node-5] => {
2026-03-17 00:45:00.102841 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-03-17 00:45:00.102852 | orchestrator |  "ceph_osd_devices": {
2026-03-17 00:45:00.102862 | orchestrator |  "sdb": {
2026-03-17 00:45:00.102873 | orchestrator |  "osd_lvm_uuid": "6d2c3af9-2510-58af-8cf3-0edda6a2b7a0"
2026-03-17 00:45:00.102885 | orchestrator |  },
2026-03-17 00:45:00.102902 | orchestrator |  "sdc": {
2026-03-17 00:45:00.102920 | orchestrator |  "osd_lvm_uuid": "bc85b6b7-69fe-55db-81a6-3a78775dfc6c"
2026-03-17 00:45:00.102935 | orchestrator |  }
2026-03-17 00:45:00.102946 | orchestrator |  },
2026-03-17 00:45:00.102957 | orchestrator |  "lvm_volumes": [
2026-03-17 00:45:00.102968 | orchestrator |  {
2026-03-17 00:45:00.103075 | orchestrator |  "data": "osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0",
2026-03-17 00:45:00.103088 | orchestrator |  "data_vg": "ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0"
2026-03-17 00:45:00.103099 | orchestrator |  },
2026-03-17 00:45:00.103110 | orchestrator |  {
2026-03-17 00:45:00.103121 | orchestrator |  "data": "osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c",
2026-03-17 00:45:00.103140 | orchestrator |  "data_vg": "ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c"
2026-03-17 00:45:00.103151 | orchestrator |  }
2026-03-17 00:45:00.103162 | orchestrator |  ]
2026-03-17 00:45:00.103177 | orchestrator |  }
2026-03-17 00:45:00.103188 | orchestrator | }
2026-03-17 00:45:00.103199 | orchestrator |
2026-03-17 00:45:00.103211 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-17 00:45:00.103229 | orchestrator | Tuesday 17 March 2026 00:44:59 +0000 (0:00:00.183) 0:00:35.346 *********
2026-03-17 00:45:00.103246 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-17 00:45:00.103280 | orchestrator |
2026-03-17 00:45:00.103296 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:45:00.103312 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-17 00:45:00.103330 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-17 00:45:00.103347 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-17 00:45:00.103364 | orchestrator |
2026-03-17 00:45:00.103400 | orchestrator |
2026-03-17 00:45:00.103417 | orchestrator |
2026-03-17 00:45:00.103434 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:45:00.103454 | orchestrator | Tuesday 17 March 2026 00:45:00 +0000 (0:00:00.813) 0:00:36.160 *********
2026-03-17 00:45:00.103475 | orchestrator | ===============================================================================
2026-03-17 00:45:00.103494 | orchestrator | Write configuration file ------------------------------------------------ 3.46s
2026-03-17 00:45:00.103515 | orchestrator | Add known links to the list of available block devices ------------------ 1.03s
2026-03-17 00:45:00.103534 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2026-03-17 00:45:00.103545 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.96s
2026-03-17 00:45:00.103591 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s
2026-03-17 00:45:00.103603 | orchestrator | Print configuration data ------------------------------------------------ 0.80s
2026-03-17 00:45:00.103614 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2026-03-17 00:45:00.103625 | orchestrator | Get initial list of available block devices ----------------------------- 0.66s
2026-03-17 00:45:00.103635 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2026-03-17 00:45:00.103646 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2026-03-17 00:45:00.103656 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2026-03-17 00:45:00.103667 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.60s
2026-03-17 00:45:00.103678 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s
2026-03-17 00:45:00.103700 | orchestrator | Add known partitions to the list of available block devices ------------- 0.53s
2026-03-17 00:45:00.409452 | orchestrator | Add known partitions to the list of available block devices ------------- 0.51s
2026-03-17 00:45:00.409530 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s
2026-03-17 00:45:00.409544 | orchestrator | Add known partitions to the list of available block devices ------------- 0.50s
2026-03-17 00:45:00.409549 | orchestrator | Print WAL devices ------------------------------------------------------- 0.50s
2026-03-17 00:45:00.409555 | orchestrator | Set DB devices config data ---------------------------------------------- 0.48s
2026-03-17 00:45:00.409560 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.47s
2026-03-17 00:45:22.934758 | orchestrator | 2026-03-17 00:45:22 | INFO  | Task 2e08b789-77ce-41f1-8352-8e76fb87ec42 (sync inventory) is running in background. Output coming soon.
2026-03-17 00:45:47.681429 | orchestrator | 2026-03-17 00:45:24 | INFO  | Starting group_vars file reorganization
2026-03-17 00:45:47.681525 | orchestrator | 2026-03-17 00:45:24 | INFO  | Moved 0 file(s) to their respective directories
2026-03-17 00:45:47.681539 | orchestrator | 2026-03-17 00:45:24 | INFO  | Group_vars file reorganization completed
2026-03-17 00:45:47.681549 | orchestrator | 2026-03-17 00:45:27 | INFO  | Starting variable preparation from inventory
2026-03-17 00:45:47.681559 | orchestrator | 2026-03-17 00:45:29 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-17 00:45:47.681568 | orchestrator | 2026-03-17 00:45:29 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-17 00:45:47.681577 | orchestrator | 2026-03-17 00:45:29 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-17 00:45:47.681586 | orchestrator | 2026-03-17 00:45:29 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-17 00:45:47.681595 | orchestrator | 2026-03-17 00:45:29 | INFO  | Variable preparation completed
2026-03-17 00:45:47.681604 | orchestrator | 2026-03-17 00:45:31 | INFO  | Starting inventory overwrite handling
2026-03-17 00:45:47.681613 | orchestrator | 2026-03-17 00:45:31 | INFO  | Handling group overwrites in 99-overwrite
2026-03-17 00:45:47.681622 | orchestrator | 2026-03-17 00:45:31 | INFO  | Removing group frr:children from 60-generic
2026-03-17 00:45:47.681631 | orchestrator | 2026-03-17 00:45:31 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-17 00:45:47.681659 | orchestrator | 2026-03-17 00:45:31 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-17 00:45:47.681669 | orchestrator | 2026-03-17 00:45:31 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-17 00:45:47.681678 | orchestrator | 2026-03-17 00:45:31 | INFO  | Handling group overwrites in 20-roles
2026-03-17 00:45:47.681687 | orchestrator | 2026-03-17 00:45:31 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-17 00:45:47.681714 | orchestrator | 2026-03-17 00:45:31 | INFO  | Removed 5 group(s) in total
2026-03-17 00:45:47.681723 | orchestrator | 2026-03-17 00:45:31 | INFO  | Inventory overwrite handling completed
2026-03-17 00:45:47.681732 | orchestrator | 2026-03-17 00:45:32 | INFO  | Starting merge of inventory files
2026-03-17 00:45:47.681741 | orchestrator | 2026-03-17 00:45:32 | INFO  | Inventory files merged successfully
2026-03-17 00:45:47.681749 | orchestrator | 2026-03-17 00:45:36 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-17 00:45:47.681758 | orchestrator | 2026-03-17 00:45:46 | INFO  | Successfully wrote ClusterShell configuration
2026-03-17 00:45:47.681767 | orchestrator | [master 3eafd59] 2026-03-17-00-45
2026-03-17 00:45:47.681777 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-17 00:45:49.723623 | orchestrator | 2026-03-17 00:45:49 | INFO  | Task 64d478b3-5032-49fb-ad0f-cdae993e4a31 (ceph-create-lvm-devices) was prepared for execution.
2026-03-17 00:45:49.723747 | orchestrator | 2026-03-17 00:45:49 | INFO  | It takes a moment until task 64d478b3-5032-49fb-ad0f-cdae993e4a31 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-17 00:46:00.287227 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-17 00:46:00.287359 | orchestrator | 2.16.14
2026-03-17 00:46:00.287385 | orchestrator |
2026-03-17 00:46:00.287407 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-17 00:46:00.287431 | orchestrator |
2026-03-17 00:46:00.287453 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-17 00:46:00.287475 | orchestrator | Tuesday 17 March 2026 00:45:53 +0000 (0:00:00.272) 0:00:00.272 *********
2026-03-17 00:46:00.287498 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-17 00:46:00.287519 | orchestrator |
2026-03-17 00:46:00.287542 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-17 00:46:00.287564 | orchestrator | Tuesday 17 March 2026 00:45:54 +0000 (0:00:00.215) 0:00:00.488 *********
2026-03-17 00:46:00.287587 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:46:00.287609 | orchestrator |
2026-03-17 00:46:00.287632 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:46:00.287655 | orchestrator | Tuesday 17 March 2026 00:45:54 +0000 (0:00:00.194) 0:00:00.683 *********
2026-03-17 00:46:00.287678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-17 00:46:00.287699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-17 00:46:00.287721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-17 00:46:00.287742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-17 00:46:00.287765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-17
00:46:00.287788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-17 00:46:00.287811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-17 00:46:00.287834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-17 00:46:00.287856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-17 00:46:00.287876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-17 00:46:00.287896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-17 00:46:00.287918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-17 00:46:00.287980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-17 00:46:00.288035 | orchestrator | 2026-03-17 00:46:00.288054 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:00.288073 | orchestrator | Tuesday 17 March 2026 00:45:54 +0000 (0:00:00.470) 0:00:01.153 ********* 2026-03-17 00:46:00.288092 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.288139 | orchestrator | 2026-03-17 00:46:00.288159 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:00.288174 | orchestrator | Tuesday 17 March 2026 00:45:54 +0000 (0:00:00.162) 0:00:01.315 ********* 2026-03-17 00:46:00.288184 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.288195 | orchestrator | 2026-03-17 00:46:00.288206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:00.288217 | orchestrator | Tuesday 17 March 2026 00:45:55 +0000 (0:00:00.181) 0:00:01.497 ********* 2026-03-17 
00:46:00.288228 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.288238 | orchestrator | 2026-03-17 00:46:00.288249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:00.288260 | orchestrator | Tuesday 17 March 2026 00:45:55 +0000 (0:00:00.170) 0:00:01.667 ********* 2026-03-17 00:46:00.288271 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.288282 | orchestrator | 2026-03-17 00:46:00.288293 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:00.288303 | orchestrator | Tuesday 17 March 2026 00:45:55 +0000 (0:00:00.166) 0:00:01.833 ********* 2026-03-17 00:46:00.288314 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.288325 | orchestrator | 2026-03-17 00:46:00.288335 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:00.288346 | orchestrator | Tuesday 17 March 2026 00:45:55 +0000 (0:00:00.188) 0:00:02.022 ********* 2026-03-17 00:46:00.288357 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.288368 | orchestrator | 2026-03-17 00:46:00.288378 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:00.288389 | orchestrator | Tuesday 17 March 2026 00:45:55 +0000 (0:00:00.175) 0:00:02.198 ********* 2026-03-17 00:46:00.288400 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.288410 | orchestrator | 2026-03-17 00:46:00.288421 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:00.288431 | orchestrator | Tuesday 17 March 2026 00:45:55 +0000 (0:00:00.171) 0:00:02.369 ********* 2026-03-17 00:46:00.288442 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.288453 | orchestrator | 2026-03-17 00:46:00.288464 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-17 00:46:00.288475 | orchestrator | Tuesday 17 March 2026 00:45:56 +0000 (0:00:00.172) 0:00:02.541 ********* 2026-03-17 00:46:00.288485 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069) 2026-03-17 00:46:00.288498 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069) 2026-03-17 00:46:00.288508 | orchestrator | 2026-03-17 00:46:00.288519 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:00.288554 | orchestrator | Tuesday 17 March 2026 00:45:56 +0000 (0:00:00.382) 0:00:02.924 ********* 2026-03-17 00:46:00.288566 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e46b8678-1baa-4ba8-a612-904460f97320) 2026-03-17 00:46:00.288577 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e46b8678-1baa-4ba8-a612-904460f97320) 2026-03-17 00:46:00.288587 | orchestrator | 2026-03-17 00:46:00.288598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:00.288609 | orchestrator | Tuesday 17 March 2026 00:45:57 +0000 (0:00:00.523) 0:00:03.448 ********* 2026-03-17 00:46:00.288620 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f95d5766-a3db-4d15-9977-785c02a190f5) 2026-03-17 00:46:00.288629 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f95d5766-a3db-4d15-9977-785c02a190f5) 2026-03-17 00:46:00.288649 | orchestrator | 2026-03-17 00:46:00.288658 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:00.288668 | orchestrator | Tuesday 17 March 2026 00:45:57 +0000 (0:00:00.539) 0:00:03.988 ********* 2026-03-17 00:46:00.288677 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2854fd14-3e82-4dcb-865e-ef6e028a2c86) 2026-03-17 00:46:00.288687 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2854fd14-3e82-4dcb-865e-ef6e028a2c86) 2026-03-17 00:46:00.288696 | orchestrator | 2026-03-17 00:46:00.288705 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:00.288715 | orchestrator | Tuesday 17 March 2026 00:45:58 +0000 (0:00:00.786) 0:00:04.775 ********* 2026-03-17 00:46:00.288724 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-17 00:46:00.288734 | orchestrator | 2026-03-17 00:46:00.288743 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:00.288753 | orchestrator | Tuesday 17 March 2026 00:45:58 +0000 (0:00:00.275) 0:00:05.050 ********* 2026-03-17 00:46:00.288762 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-17 00:46:00.288772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-17 00:46:00.288781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-17 00:46:00.288790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-17 00:46:00.288800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-17 00:46:00.288809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-17 00:46:00.288818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-17 00:46:00.288828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-17 00:46:00.288837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-17 00:46:00.288846 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-17 00:46:00.288856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-17 00:46:00.288888 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-17 00:46:00.288898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-17 00:46:00.288907 | orchestrator | 2026-03-17 00:46:00.288917 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:00.288926 | orchestrator | Tuesday 17 March 2026 00:45:58 +0000 (0:00:00.361) 0:00:05.411 ********* 2026-03-17 00:46:00.288936 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.288945 | orchestrator | 2026-03-17 00:46:00.288955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:00.288964 | orchestrator | Tuesday 17 March 2026 00:45:59 +0000 (0:00:00.186) 0:00:05.598 ********* 2026-03-17 00:46:00.288974 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.288983 | orchestrator | 2026-03-17 00:46:00.288993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:00.289003 | orchestrator | Tuesday 17 March 2026 00:45:59 +0000 (0:00:00.180) 0:00:05.778 ********* 2026-03-17 00:46:00.289012 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.289022 | orchestrator | 2026-03-17 00:46:00.289031 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:00.289041 | orchestrator | Tuesday 17 March 2026 00:45:59 +0000 (0:00:00.179) 0:00:05.957 ********* 2026-03-17 00:46:00.289050 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.289066 | orchestrator | 2026-03-17 00:46:00.289076 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-17 00:46:00.289085 | orchestrator | Tuesday 17 March 2026 00:45:59 +0000 (0:00:00.163) 0:00:06.120 ********* 2026-03-17 00:46:00.289095 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.289148 | orchestrator | 2026-03-17 00:46:00.289158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:00.289168 | orchestrator | Tuesday 17 March 2026 00:45:59 +0000 (0:00:00.172) 0:00:06.293 ********* 2026-03-17 00:46:00.289178 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.289187 | orchestrator | 2026-03-17 00:46:00.289196 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:00.289206 | orchestrator | Tuesday 17 March 2026 00:46:00 +0000 (0:00:00.180) 0:00:06.473 ********* 2026-03-17 00:46:00.289216 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:00.289225 | orchestrator | 2026-03-17 00:46:00.289240 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:07.675171 | orchestrator | Tuesday 17 March 2026 00:46:00 +0000 (0:00:00.229) 0:00:06.703 ********* 2026-03-17 00:46:07.675284 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.675309 | orchestrator | 2026-03-17 00:46:07.675330 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:07.675349 | orchestrator | Tuesday 17 March 2026 00:46:00 +0000 (0:00:00.181) 0:00:06.884 ********* 2026-03-17 00:46:07.675368 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-17 00:46:07.675386 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-17 00:46:07.675405 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-17 00:46:07.675425 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-17 00:46:07.675443 | orchestrator | 2026-03-17 
00:46:07.675461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:07.675480 | orchestrator | Tuesday 17 March 2026 00:46:01 +0000 (0:00:00.840) 0:00:07.725 ********* 2026-03-17 00:46:07.675491 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.675502 | orchestrator | 2026-03-17 00:46:07.675513 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:07.675524 | orchestrator | Tuesday 17 March 2026 00:46:01 +0000 (0:00:00.197) 0:00:07.923 ********* 2026-03-17 00:46:07.675535 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.675546 | orchestrator | 2026-03-17 00:46:07.675557 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:07.675568 | orchestrator | Tuesday 17 March 2026 00:46:01 +0000 (0:00:00.199) 0:00:08.123 ********* 2026-03-17 00:46:07.675579 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.675590 | orchestrator | 2026-03-17 00:46:07.675601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:07.675612 | orchestrator | Tuesday 17 March 2026 00:46:01 +0000 (0:00:00.180) 0:00:08.304 ********* 2026-03-17 00:46:07.675623 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.675633 | orchestrator | 2026-03-17 00:46:07.675644 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-17 00:46:07.675658 | orchestrator | Tuesday 17 March 2026 00:46:02 +0000 (0:00:00.184) 0:00:08.488 ********* 2026-03-17 00:46:07.675670 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.675682 | orchestrator | 2026-03-17 00:46:07.675695 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-17 00:46:07.675708 | orchestrator | Tuesday 17 March 2026 00:46:02 +0000 (0:00:00.127) 
0:00:08.616 ********* 2026-03-17 00:46:07.675721 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b48309d9-c226-530e-bc23-6e205cf9651b'}}) 2026-03-17 00:46:07.675734 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'}}) 2026-03-17 00:46:07.675746 | orchestrator | 2026-03-17 00:46:07.675759 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-17 00:46:07.675797 | orchestrator | Tuesday 17 March 2026 00:46:02 +0000 (0:00:00.213) 0:00:08.829 ********* 2026-03-17 00:46:07.675811 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'}) 2026-03-17 00:46:07.675825 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'}) 2026-03-17 00:46:07.675837 | orchestrator | 2026-03-17 00:46:07.675850 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-17 00:46:07.675877 | orchestrator | Tuesday 17 March 2026 00:46:04 +0000 (0:00:01.902) 0:00:10.732 ********* 2026-03-17 00:46:07.675891 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:07.675905 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:07.675917 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.675929 | orchestrator | 2026-03-17 00:46:07.675941 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-17 00:46:07.675954 | orchestrator | Tuesday 17 March 2026 
00:46:04 +0000 (0:00:00.137) 0:00:10.869 ********* 2026-03-17 00:46:07.675966 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'}) 2026-03-17 00:46:07.675979 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'}) 2026-03-17 00:46:07.675992 | orchestrator | 2026-03-17 00:46:07.676005 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-17 00:46:07.676018 | orchestrator | Tuesday 17 March 2026 00:46:05 +0000 (0:00:01.450) 0:00:12.319 ********* 2026-03-17 00:46:07.676028 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:07.676040 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:07.676051 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.676061 | orchestrator | 2026-03-17 00:46:07.676072 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-17 00:46:07.676083 | orchestrator | Tuesday 17 March 2026 00:46:06 +0000 (0:00:00.156) 0:00:12.475 ********* 2026-03-17 00:46:07.676114 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.676182 | orchestrator | 2026-03-17 00:46:07.676196 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-17 00:46:07.676206 | orchestrator | Tuesday 17 March 2026 00:46:06 +0000 (0:00:00.122) 0:00:12.598 ********* 2026-03-17 00:46:07.676217 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 
'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:07.676228 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:07.676239 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.676250 | orchestrator | 2026-03-17 00:46:07.676261 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-17 00:46:07.676271 | orchestrator | Tuesday 17 March 2026 00:46:06 +0000 (0:00:00.272) 0:00:12.870 ********* 2026-03-17 00:46:07.676282 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.676293 | orchestrator | 2026-03-17 00:46:07.676303 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-17 00:46:07.676314 | orchestrator | Tuesday 17 March 2026 00:46:06 +0000 (0:00:00.117) 0:00:12.988 ********* 2026-03-17 00:46:07.676334 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:07.676345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:07.676355 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.676366 | orchestrator | 2026-03-17 00:46:07.676376 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-17 00:46:07.676387 | orchestrator | Tuesday 17 March 2026 00:46:06 +0000 (0:00:00.135) 0:00:13.124 ********* 2026-03-17 00:46:07.676398 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.676408 | orchestrator | 2026-03-17 00:46:07.676419 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-17 00:46:07.676430 | orchestrator | 
Tuesday 17 March 2026 00:46:06 +0000 (0:00:00.139) 0:00:13.263 ********* 2026-03-17 00:46:07.676440 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:07.676451 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:07.676462 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.676473 | orchestrator | 2026-03-17 00:46:07.676483 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-17 00:46:07.676494 | orchestrator | Tuesday 17 March 2026 00:46:06 +0000 (0:00:00.135) 0:00:13.399 ********* 2026-03-17 00:46:07.676505 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:46:07.676516 | orchestrator | 2026-03-17 00:46:07.676526 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-17 00:46:07.676537 | orchestrator | Tuesday 17 March 2026 00:46:07 +0000 (0:00:00.138) 0:00:13.538 ********* 2026-03-17 00:46:07.676548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:07.676559 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:07.676570 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.676580 | orchestrator | 2026-03-17 00:46:07.676591 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-17 00:46:07.676602 | orchestrator | Tuesday 17 March 2026 00:46:07 +0000 (0:00:00.152) 0:00:13.690 ********* 2026-03-17 00:46:07.676613 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:07.676631 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:07.676642 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.676653 | orchestrator | 2026-03-17 00:46:07.676664 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-17 00:46:07.676675 | orchestrator | Tuesday 17 March 2026 00:46:07 +0000 (0:00:00.132) 0:00:13.822 ********* 2026-03-17 00:46:07.676685 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:07.676696 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:07.676707 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.676717 | orchestrator | 2026-03-17 00:46:07.676728 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-17 00:46:07.676739 | orchestrator | Tuesday 17 March 2026 00:46:07 +0000 (0:00:00.130) 0:00:13.953 ********* 2026-03-17 00:46:07.676756 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:07.676767 | orchestrator | 2026-03-17 00:46:07.676778 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-17 00:46:07.676797 | orchestrator | Tuesday 17 March 2026 00:46:07 +0000 (0:00:00.136) 0:00:14.090 ********* 2026-03-17 00:46:13.383850 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.383942 | orchestrator | 2026-03-17 00:46:13.383960 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-03-17 00:46:13.383969 | orchestrator | Tuesday 17 March 2026 00:46:07 +0000 (0:00:00.121) 0:00:14.211 ********* 2026-03-17 00:46:13.383975 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.383981 | orchestrator | 2026-03-17 00:46:13.383987 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-17 00:46:13.383994 | orchestrator | Tuesday 17 March 2026 00:46:07 +0000 (0:00:00.100) 0:00:14.312 ********* 2026-03-17 00:46:13.384006 | orchestrator | ok: [testbed-node-3] => { 2026-03-17 00:46:13.384013 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-17 00:46:13.384019 | orchestrator | } 2026-03-17 00:46:13.384025 | orchestrator | 2026-03-17 00:46:13.384031 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-17 00:46:13.384037 | orchestrator | Tuesday 17 March 2026 00:46:08 +0000 (0:00:00.240) 0:00:14.552 ********* 2026-03-17 00:46:13.384043 | orchestrator | ok: [testbed-node-3] => { 2026-03-17 00:46:13.384049 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-17 00:46:13.384054 | orchestrator | } 2026-03-17 00:46:13.384060 | orchestrator | 2026-03-17 00:46:13.384066 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-17 00:46:13.384072 | orchestrator | Tuesday 17 March 2026 00:46:08 +0000 (0:00:00.135) 0:00:14.688 ********* 2026-03-17 00:46:13.384077 | orchestrator | ok: [testbed-node-3] => { 2026-03-17 00:46:13.384084 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-17 00:46:13.384091 | orchestrator | } 2026-03-17 00:46:13.384097 | orchestrator | 2026-03-17 00:46:13.384102 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-17 00:46:13.384108 | orchestrator | Tuesday 17 March 2026 00:46:08 +0000 (0:00:00.138) 0:00:14.826 ********* 2026-03-17 00:46:13.384114 | orchestrator | ok: 
[testbed-node-3] 2026-03-17 00:46:13.384120 | orchestrator | 2026-03-17 00:46:13.384126 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-17 00:46:13.384131 | orchestrator | Tuesday 17 March 2026 00:46:09 +0000 (0:00:00.623) 0:00:15.449 ********* 2026-03-17 00:46:13.384137 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:46:13.384157 | orchestrator | 2026-03-17 00:46:13.384162 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-17 00:46:13.384168 | orchestrator | Tuesday 17 March 2026 00:46:09 +0000 (0:00:00.515) 0:00:15.965 ********* 2026-03-17 00:46:13.384174 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:46:13.384180 | orchestrator | 2026-03-17 00:46:13.384186 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-17 00:46:13.384191 | orchestrator | Tuesday 17 March 2026 00:46:10 +0000 (0:00:00.491) 0:00:16.456 ********* 2026-03-17 00:46:13.384197 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:46:13.384203 | orchestrator | 2026-03-17 00:46:13.384209 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-17 00:46:13.384215 | orchestrator | Tuesday 17 March 2026 00:46:10 +0000 (0:00:00.120) 0:00:16.576 ********* 2026-03-17 00:46:13.384220 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384226 | orchestrator | 2026-03-17 00:46:13.384232 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-17 00:46:13.384238 | orchestrator | Tuesday 17 March 2026 00:46:10 +0000 (0:00:00.095) 0:00:16.672 ********* 2026-03-17 00:46:13.384243 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384249 | orchestrator | 2026-03-17 00:46:13.384255 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-17 00:46:13.384279 | orchestrator | 
Tuesday 17 March 2026 00:46:10 +0000 (0:00:00.114) 0:00:16.786 ********* 2026-03-17 00:46:13.384296 | orchestrator | ok: [testbed-node-3] => { 2026-03-17 00:46:13.384303 | orchestrator |  "vgs_report": { 2026-03-17 00:46:13.384308 | orchestrator |  "vg": [] 2026-03-17 00:46:13.384314 | orchestrator |  } 2026-03-17 00:46:13.384320 | orchestrator | } 2026-03-17 00:46:13.384326 | orchestrator | 2026-03-17 00:46:13.384332 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-17 00:46:13.384337 | orchestrator | Tuesday 17 March 2026 00:46:10 +0000 (0:00:00.119) 0:00:16.906 ********* 2026-03-17 00:46:13.384350 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384356 | orchestrator | 2026-03-17 00:46:13.384362 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-17 00:46:13.384375 | orchestrator | Tuesday 17 March 2026 00:46:10 +0000 (0:00:00.136) 0:00:17.043 ********* 2026-03-17 00:46:13.384381 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384386 | orchestrator | 2026-03-17 00:46:13.384392 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-17 00:46:13.384398 | orchestrator | Tuesday 17 March 2026 00:46:10 +0000 (0:00:00.118) 0:00:17.162 ********* 2026-03-17 00:46:13.384403 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384409 | orchestrator | 2026-03-17 00:46:13.384416 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-17 00:46:13.384423 | orchestrator | Tuesday 17 March 2026 00:46:10 +0000 (0:00:00.257) 0:00:17.420 ********* 2026-03-17 00:46:13.384430 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384436 | orchestrator | 2026-03-17 00:46:13.384442 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-17 00:46:13.384449 | orchestrator | Tuesday 
17 March 2026 00:46:11 +0000 (0:00:00.138) 0:00:17.558 ********* 2026-03-17 00:46:13.384455 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384462 | orchestrator | 2026-03-17 00:46:13.384468 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-17 00:46:13.384474 | orchestrator | Tuesday 17 March 2026 00:46:11 +0000 (0:00:00.143) 0:00:17.702 ********* 2026-03-17 00:46:13.384481 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384487 | orchestrator | 2026-03-17 00:46:13.384494 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-17 00:46:13.384500 | orchestrator | Tuesday 17 March 2026 00:46:11 +0000 (0:00:00.123) 0:00:17.825 ********* 2026-03-17 00:46:13.384507 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384513 | orchestrator | 2026-03-17 00:46:13.384519 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-17 00:46:13.384526 | orchestrator | Tuesday 17 March 2026 00:46:11 +0000 (0:00:00.127) 0:00:17.953 ********* 2026-03-17 00:46:13.384544 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384551 | orchestrator | 2026-03-17 00:46:13.384557 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-17 00:46:13.384564 | orchestrator | Tuesday 17 March 2026 00:46:11 +0000 (0:00:00.117) 0:00:18.070 ********* 2026-03-17 00:46:13.384570 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384577 | orchestrator | 2026-03-17 00:46:13.384582 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-17 00:46:13.384588 | orchestrator | Tuesday 17 March 2026 00:46:11 +0000 (0:00:00.132) 0:00:18.202 ********* 2026-03-17 00:46:13.384594 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384600 | orchestrator | 2026-03-17 00:46:13.384605 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-17 00:46:13.384611 | orchestrator | Tuesday 17 March 2026 00:46:11 +0000 (0:00:00.136) 0:00:18.339 ********* 2026-03-17 00:46:13.384617 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384622 | orchestrator | 2026-03-17 00:46:13.384628 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-17 00:46:13.384634 | orchestrator | Tuesday 17 March 2026 00:46:12 +0000 (0:00:00.129) 0:00:18.469 ********* 2026-03-17 00:46:13.384644 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384650 | orchestrator | 2026-03-17 00:46:13.384656 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-17 00:46:13.384662 | orchestrator | Tuesday 17 March 2026 00:46:12 +0000 (0:00:00.131) 0:00:18.600 ********* 2026-03-17 00:46:13.384668 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384673 | orchestrator | 2026-03-17 00:46:13.384679 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-17 00:46:13.384685 | orchestrator | Tuesday 17 March 2026 00:46:12 +0000 (0:00:00.137) 0:00:18.737 ********* 2026-03-17 00:46:13.384691 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384696 | orchestrator | 2026-03-17 00:46:13.384702 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-17 00:46:13.384708 | orchestrator | Tuesday 17 March 2026 00:46:12 +0000 (0:00:00.104) 0:00:18.841 ********* 2026-03-17 00:46:13.384715 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:13.384722 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 
'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:13.384728 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384734 | orchestrator | 2026-03-17 00:46:13.384740 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-17 00:46:13.384746 | orchestrator | Tuesday 17 March 2026 00:46:12 +0000 (0:00:00.303) 0:00:19.145 ********* 2026-03-17 00:46:13.384752 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:13.384757 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:13.384763 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384769 | orchestrator | 2026-03-17 00:46:13.384775 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-17 00:46:13.384781 | orchestrator | Tuesday 17 March 2026 00:46:12 +0000 (0:00:00.132) 0:00:19.278 ********* 2026-03-17 00:46:13.384787 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:13.384793 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:13.384799 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384804 | orchestrator | 2026-03-17 00:46:13.384810 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-17 00:46:13.384816 | orchestrator | Tuesday 17 March 2026 00:46:12 +0000 (0:00:00.134) 0:00:19.412 ********* 2026-03-17 00:46:13.384822 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:13.384828 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:13.384834 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384839 | orchestrator | 2026-03-17 00:46:13.384845 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-17 00:46:13.384851 | orchestrator | Tuesday 17 March 2026 00:46:13 +0000 (0:00:00.117) 0:00:19.529 ********* 2026-03-17 00:46:13.384857 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:13.384862 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:13.384872 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:13.384878 | orchestrator | 2026-03-17 00:46:13.384884 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-17 00:46:13.384889 | orchestrator | Tuesday 17 March 2026 00:46:13 +0000 (0:00:00.128) 0:00:19.657 ********* 2026-03-17 00:46:13.384899 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:18.387759 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:18.387856 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:18.387869 | orchestrator | 2026-03-17 00:46:18.387879 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-17 00:46:18.387888 | orchestrator | Tuesday 17 March 2026 00:46:13 +0000 (0:00:00.147) 0:00:19.805 ********* 2026-03-17 00:46:18.387897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:18.387906 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:18.387915 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:18.387923 | orchestrator | 2026-03-17 00:46:18.387949 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-17 00:46:18.387958 | orchestrator | Tuesday 17 March 2026 00:46:13 +0000 (0:00:00.144) 0:00:19.949 ********* 2026-03-17 00:46:18.387966 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:18.387975 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:18.387984 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:18.387992 | orchestrator | 2026-03-17 00:46:18.388000 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-17 00:46:18.388008 | orchestrator | Tuesday 17 March 2026 00:46:13 +0000 (0:00:00.129) 0:00:20.078 ********* 2026-03-17 00:46:18.388017 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:46:18.388026 | orchestrator | 2026-03-17 00:46:18.388034 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-17 00:46:18.388042 | orchestrator | Tuesday 17 March 2026 00:46:14 +0000 
(0:00:00.468) 0:00:20.547 ********* 2026-03-17 00:46:18.388051 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:46:18.388058 | orchestrator | 2026-03-17 00:46:18.388067 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-17 00:46:18.388075 | orchestrator | Tuesday 17 March 2026 00:46:14 +0000 (0:00:00.496) 0:00:21.043 ********* 2026-03-17 00:46:18.388082 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:46:18.388091 | orchestrator | 2026-03-17 00:46:18.388099 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-17 00:46:18.388107 | orchestrator | Tuesday 17 March 2026 00:46:14 +0000 (0:00:00.146) 0:00:21.189 ********* 2026-03-17 00:46:18.388115 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'vg_name': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'}) 2026-03-17 00:46:18.388129 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'vg_name': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'}) 2026-03-17 00:46:18.388137 | orchestrator | 2026-03-17 00:46:18.388145 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-17 00:46:18.388153 | orchestrator | Tuesday 17 March 2026 00:46:14 +0000 (0:00:00.152) 0:00:21.342 ********* 2026-03-17 00:46:18.388181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:18.388209 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:18.388218 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:18.388226 | orchestrator | 2026-03-17 00:46:18.388235 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-17 00:46:18.388242 | orchestrator | Tuesday 17 March 2026 00:46:15 +0000 (0:00:00.346) 0:00:21.688 ********* 2026-03-17 00:46:18.388251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:18.388258 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:18.388266 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:18.388275 | orchestrator | 2026-03-17 00:46:18.388283 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-17 00:46:18.388291 | orchestrator | Tuesday 17 March 2026 00:46:15 +0000 (0:00:00.153) 0:00:21.842 ********* 2026-03-17 00:46:18.388299 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})  2026-03-17 00:46:18.388308 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})  2026-03-17 00:46:18.388316 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:18.388324 | orchestrator | 2026-03-17 00:46:18.388332 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-17 00:46:18.388340 | orchestrator | Tuesday 17 March 2026 00:46:15 +0000 (0:00:00.147) 0:00:21.990 ********* 2026-03-17 00:46:18.388364 | orchestrator | ok: [testbed-node-3] => { 2026-03-17 00:46:18.388373 | orchestrator |  "lvm_report": { 2026-03-17 00:46:18.388381 | orchestrator |  "lv": [ 2026-03-17 00:46:18.388390 | orchestrator |  { 2026-03-17 00:46:18.388399 | orchestrator |  "lv_name": 
"osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f", 2026-03-17 00:46:18.388408 | orchestrator |  "vg_name": "ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f" 2026-03-17 00:46:18.388416 | orchestrator |  }, 2026-03-17 00:46:18.388425 | orchestrator |  { 2026-03-17 00:46:18.388433 | orchestrator |  "lv_name": "osd-block-b48309d9-c226-530e-bc23-6e205cf9651b", 2026-03-17 00:46:18.388441 | orchestrator |  "vg_name": "ceph-b48309d9-c226-530e-bc23-6e205cf9651b" 2026-03-17 00:46:18.388449 | orchestrator |  } 2026-03-17 00:46:18.388457 | orchestrator |  ], 2026-03-17 00:46:18.388466 | orchestrator |  "pv": [ 2026-03-17 00:46:18.388474 | orchestrator |  { 2026-03-17 00:46:18.388482 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-17 00:46:18.388490 | orchestrator |  "vg_name": "ceph-b48309d9-c226-530e-bc23-6e205cf9651b" 2026-03-17 00:46:18.388498 | orchestrator |  }, 2026-03-17 00:46:18.388506 | orchestrator |  { 2026-03-17 00:46:18.388515 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-17 00:46:18.388523 | orchestrator |  "vg_name": "ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f" 2026-03-17 00:46:18.388531 | orchestrator |  } 2026-03-17 00:46:18.388539 | orchestrator |  ] 2026-03-17 00:46:18.388547 | orchestrator |  } 2026-03-17 00:46:18.388556 | orchestrator | } 2026-03-17 00:46:18.388564 | orchestrator | 2026-03-17 00:46:18.388573 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-17 00:46:18.388581 | orchestrator | 2026-03-17 00:46:18.388589 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-17 00:46:18.388597 | orchestrator | Tuesday 17 March 2026 00:46:15 +0000 (0:00:00.311) 0:00:22.302 ********* 2026-03-17 00:46:18.388611 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-17 00:46:18.388620 | orchestrator | 2026-03-17 00:46:18.388628 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-17 
00:46:18.388636 | orchestrator | Tuesday 17 March 2026 00:46:16 +0000 (0:00:00.236) 0:00:22.539 ********* 2026-03-17 00:46:18.388644 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:18.388651 | orchestrator | 2026-03-17 00:46:18.388658 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:18.388666 | orchestrator | Tuesday 17 March 2026 00:46:16 +0000 (0:00:00.242) 0:00:22.781 ********* 2026-03-17 00:46:18.388673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-17 00:46:18.388680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-17 00:46:18.388686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-17 00:46:18.388693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-17 00:46:18.388700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-17 00:46:18.388708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-17 00:46:18.388715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-17 00:46:18.388727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-17 00:46:18.388734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-17 00:46:18.388741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-17 00:46:18.388748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-17 00:46:18.388756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-17 00:46:18.388763 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-17 00:46:18.388771 | orchestrator | 2026-03-17 00:46:18.388779 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:18.388788 | orchestrator | Tuesday 17 March 2026 00:46:16 +0000 (0:00:00.396) 0:00:23.178 ********* 2026-03-17 00:46:18.388795 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:18.388803 | orchestrator | 2026-03-17 00:46:18.388811 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:18.388819 | orchestrator | Tuesday 17 March 2026 00:46:16 +0000 (0:00:00.190) 0:00:23.369 ********* 2026-03-17 00:46:18.388827 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:18.388834 | orchestrator | 2026-03-17 00:46:18.388842 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:18.388850 | orchestrator | Tuesday 17 March 2026 00:46:17 +0000 (0:00:00.207) 0:00:23.577 ********* 2026-03-17 00:46:18.388858 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:18.388866 | orchestrator | 2026-03-17 00:46:18.388874 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:18.388882 | orchestrator | Tuesday 17 March 2026 00:46:17 +0000 (0:00:00.589) 0:00:24.167 ********* 2026-03-17 00:46:18.388889 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:18.388897 | orchestrator | 2026-03-17 00:46:18.388905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:18.388913 | orchestrator | Tuesday 17 March 2026 00:46:17 +0000 (0:00:00.196) 0:00:24.363 ********* 2026-03-17 00:46:18.388920 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:18.388928 | orchestrator | 2026-03-17 00:46:18.388937 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-17 00:46:18.388942 | orchestrator | Tuesday 17 March 2026 00:46:18 +0000 (0:00:00.241) 0:00:24.605 ********* 2026-03-17 00:46:18.388953 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:18.388958 | orchestrator | 2026-03-17 00:46:18.388968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:29.784604 | orchestrator | Tuesday 17 March 2026 00:46:18 +0000 (0:00:00.199) 0:00:24.804 ********* 2026-03-17 00:46:29.784731 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.784757 | orchestrator | 2026-03-17 00:46:29.784780 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:29.784801 | orchestrator | Tuesday 17 March 2026 00:46:18 +0000 (0:00:00.237) 0:00:25.042 ********* 2026-03-17 00:46:29.784821 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.784841 | orchestrator | 2026-03-17 00:46:29.784861 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:29.784881 | orchestrator | Tuesday 17 March 2026 00:46:18 +0000 (0:00:00.219) 0:00:25.262 ********* 2026-03-17 00:46:29.784901 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35) 2026-03-17 00:46:29.784922 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35) 2026-03-17 00:46:29.784942 | orchestrator | 2026-03-17 00:46:29.784962 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:29.784982 | orchestrator | Tuesday 17 March 2026 00:46:19 +0000 (0:00:00.411) 0:00:25.674 ********* 2026-03-17 00:46:29.785002 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9ec754d5-296d-4a8a-b6d8-e4830272a171) 2026-03-17 00:46:29.785022 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9ec754d5-296d-4a8a-b6d8-e4830272a171) 2026-03-17 00:46:29.785042 | orchestrator | 2026-03-17 00:46:29.785062 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:29.785081 | orchestrator | Tuesday 17 March 2026 00:46:19 +0000 (0:00:00.437) 0:00:26.112 ********* 2026-03-17 00:46:29.785101 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d8ebe49d-b73b-4490-897b-f13bdc67f86d) 2026-03-17 00:46:29.785120 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d8ebe49d-b73b-4490-897b-f13bdc67f86d) 2026-03-17 00:46:29.785140 | orchestrator | 2026-03-17 00:46:29.785159 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:29.785179 | orchestrator | Tuesday 17 March 2026 00:46:20 +0000 (0:00:00.426) 0:00:26.538 ********* 2026-03-17 00:46:29.785243 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f91ef76e-9f0f-49ef-bc09-7b70daad6579) 2026-03-17 00:46:29.785262 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f91ef76e-9f0f-49ef-bc09-7b70daad6579) 2026-03-17 00:46:29.785281 | orchestrator | 2026-03-17 00:46:29.785300 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:29.785320 | orchestrator | Tuesday 17 March 2026 00:46:20 +0000 (0:00:00.648) 0:00:27.187 ********* 2026-03-17 00:46:29.785340 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-17 00:46:29.785360 | orchestrator | 2026-03-17 00:46:29.785379 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.785399 | orchestrator | Tuesday 17 March 2026 00:46:21 +0000 (0:00:00.527) 0:00:27.714 ********* 2026-03-17 00:46:29.785418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-17 00:46:29.785439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-17 00:46:29.785459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-17 00:46:29.785479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-17 00:46:29.785499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-17 00:46:29.785518 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-17 00:46:29.785571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-17 00:46:29.785592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-17 00:46:29.785611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-17 00:46:29.785631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-17 00:46:29.785651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-17 00:46:29.785670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-17 00:46:29.785690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-17 00:46:29.785708 | orchestrator | 2026-03-17 00:46:29.785726 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.785743 | orchestrator | Tuesday 17 March 2026 00:46:22 +0000 (0:00:00.826) 0:00:28.540 ********* 2026-03-17 00:46:29.785761 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.785779 | orchestrator | 2026-03-17 
00:46:29.785796 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.785836 | orchestrator | Tuesday 17 March 2026 00:46:22 +0000 (0:00:00.191) 0:00:28.732 ********* 2026-03-17 00:46:29.785855 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.785872 | orchestrator | 2026-03-17 00:46:29.785890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.785908 | orchestrator | Tuesday 17 March 2026 00:46:22 +0000 (0:00:00.223) 0:00:28.957 ********* 2026-03-17 00:46:29.785926 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.785943 | orchestrator | 2026-03-17 00:46:29.785982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.786001 | orchestrator | Tuesday 17 March 2026 00:46:22 +0000 (0:00:00.194) 0:00:29.151 ********* 2026-03-17 00:46:29.786129 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.786152 | orchestrator | 2026-03-17 00:46:29.786169 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.786187 | orchestrator | Tuesday 17 March 2026 00:46:22 +0000 (0:00:00.203) 0:00:29.355 ********* 2026-03-17 00:46:29.786233 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.786251 | orchestrator | 2026-03-17 00:46:29.786268 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.786285 | orchestrator | Tuesday 17 March 2026 00:46:23 +0000 (0:00:00.194) 0:00:29.549 ********* 2026-03-17 00:46:29.786300 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.786315 | orchestrator | 2026-03-17 00:46:29.786332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.786349 | orchestrator | Tuesday 17 March 2026 00:46:23 +0000 (0:00:00.190) 
0:00:29.740 ********* 2026-03-17 00:46:29.786366 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.786383 | orchestrator | 2026-03-17 00:46:29.786401 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.786418 | orchestrator | Tuesday 17 March 2026 00:46:23 +0000 (0:00:00.201) 0:00:29.941 ********* 2026-03-17 00:46:29.786436 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.786453 | orchestrator | 2026-03-17 00:46:29.786470 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.786488 | orchestrator | Tuesday 17 March 2026 00:46:23 +0000 (0:00:00.217) 0:00:30.159 ********* 2026-03-17 00:46:29.786506 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-17 00:46:29.786523 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-17 00:46:29.786541 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-17 00:46:29.786559 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-17 00:46:29.786577 | orchestrator | 2026-03-17 00:46:29.786595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.786627 | orchestrator | Tuesday 17 March 2026 00:46:24 +0000 (0:00:00.826) 0:00:30.985 ********* 2026-03-17 00:46:29.786646 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.786662 | orchestrator | 2026-03-17 00:46:29.786680 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.786698 | orchestrator | Tuesday 17 March 2026 00:46:24 +0000 (0:00:00.250) 0:00:31.236 ********* 2026-03-17 00:46:29.786715 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.786732 | orchestrator | 2026-03-17 00:46:29.786750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.786768 | orchestrator | Tuesday 17 
March 2026 00:46:25 +0000 (0:00:00.747) 0:00:31.983 ********* 2026-03-17 00:46:29.786786 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.786803 | orchestrator | 2026-03-17 00:46:29.786821 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:29.786838 | orchestrator | Tuesday 17 March 2026 00:46:25 +0000 (0:00:00.212) 0:00:32.195 ********* 2026-03-17 00:46:29.786856 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.786874 | orchestrator | 2026-03-17 00:46:29.786891 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-17 00:46:29.786918 | orchestrator | Tuesday 17 March 2026 00:46:26 +0000 (0:00:00.266) 0:00:32.462 ********* 2026-03-17 00:46:29.786936 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.786954 | orchestrator | 2026-03-17 00:46:29.786972 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-17 00:46:29.786988 | orchestrator | Tuesday 17 March 2026 00:46:26 +0000 (0:00:00.139) 0:00:32.602 ********* 2026-03-17 00:46:29.787003 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13f697f5-12ba-5526-98d1-b1a9c265f800'}}) 2026-03-17 00:46:29.787019 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a0cc3c10-edeb-5a7b-849a-4273befffbf6'}}) 2026-03-17 00:46:29.787035 | orchestrator | 2026-03-17 00:46:29.787051 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-17 00:46:29.787066 | orchestrator | Tuesday 17 March 2026 00:46:26 +0000 (0:00:00.242) 0:00:32.844 ********* 2026-03-17 00:46:29.787082 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'}) 2026-03-17 00:46:29.787100 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'}) 2026-03-17 00:46:29.787114 | orchestrator | 2026-03-17 00:46:29.787130 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-17 00:46:29.787146 | orchestrator | Tuesday 17 March 2026 00:46:28 +0000 (0:00:01.845) 0:00:34.690 ********* 2026-03-17 00:46:29.787163 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:29.787181 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:29.787265 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:29.787288 | orchestrator | 2026-03-17 00:46:29.787306 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-17 00:46:29.787323 | orchestrator | Tuesday 17 March 2026 00:46:28 +0000 (0:00:00.163) 0:00:34.853 ********* 2026-03-17 00:46:29.787341 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'}) 2026-03-17 00:46:29.787375 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'}) 2026-03-17 00:46:35.309319 | orchestrator | 2026-03-17 00:46:35.309403 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-17 00:46:35.309431 | orchestrator | Tuesday 17 March 2026 00:46:29 +0000 (0:00:01.345) 0:00:36.199 ********* 2026-03-17 00:46:35.309500 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 
'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:35.309510 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:35.309521 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.309532 | orchestrator | 2026-03-17 00:46:35.309543 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-17 00:46:35.309553 | orchestrator | Tuesday 17 March 2026 00:46:29 +0000 (0:00:00.148) 0:00:36.347 ********* 2026-03-17 00:46:35.309563 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.309573 | orchestrator | 2026-03-17 00:46:35.309583 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-17 00:46:35.309593 | orchestrator | Tuesday 17 March 2026 00:46:30 +0000 (0:00:00.130) 0:00:36.477 ********* 2026-03-17 00:46:35.309603 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:35.309612 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:35.309623 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.309632 | orchestrator | 2026-03-17 00:46:35.309642 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-17 00:46:35.309652 | orchestrator | Tuesday 17 March 2026 00:46:30 +0000 (0:00:00.146) 0:00:36.624 ********* 2026-03-17 00:46:35.309661 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.309671 | orchestrator | 2026-03-17 00:46:35.309682 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-17 00:46:35.309692 | orchestrator | 
Tuesday 17 March 2026 00:46:30 +0000 (0:00:00.125) 0:00:36.750 ********* 2026-03-17 00:46:35.309702 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:35.309713 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:35.309724 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.309733 | orchestrator | 2026-03-17 00:46:35.309742 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-17 00:46:35.309767 | orchestrator | Tuesday 17 March 2026 00:46:30 +0000 (0:00:00.351) 0:00:37.101 ********* 2026-03-17 00:46:35.309778 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.309788 | orchestrator | 2026-03-17 00:46:35.309797 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-17 00:46:35.309807 | orchestrator | Tuesday 17 March 2026 00:46:30 +0000 (0:00:00.134) 0:00:37.236 ********* 2026-03-17 00:46:35.309818 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:35.309828 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:35.309834 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.309840 | orchestrator | 2026-03-17 00:46:35.309847 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-17 00:46:35.309854 | orchestrator | Tuesday 17 March 2026 00:46:30 +0000 (0:00:00.147) 0:00:37.383 ********* 2026-03-17 00:46:35.309861 | orchestrator | ok: [testbed-node-4] 
2026-03-17 00:46:35.309869 | orchestrator | 2026-03-17 00:46:35.309876 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-17 00:46:35.309893 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.144) 0:00:37.527 ********* 2026-03-17 00:46:35.309900 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:35.309907 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:35.309914 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.309921 | orchestrator | 2026-03-17 00:46:35.309928 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-17 00:46:35.309935 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.157) 0:00:37.685 ********* 2026-03-17 00:46:35.309942 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:35.309949 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:35.309957 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.309964 | orchestrator | 2026-03-17 00:46:35.309971 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-17 00:46:35.309993 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.140) 0:00:37.825 ********* 2026-03-17 00:46:35.310001 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 
00:46:35.310008 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:35.310050 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.310059 | orchestrator | 2026-03-17 00:46:35.310066 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-17 00:46:35.310074 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.141) 0:00:37.967 ********* 2026-03-17 00:46:35.310081 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.310087 | orchestrator | 2026-03-17 00:46:35.310094 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-17 00:46:35.310101 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.135) 0:00:38.103 ********* 2026-03-17 00:46:35.310108 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.310115 | orchestrator | 2026-03-17 00:46:35.310122 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-17 00:46:35.310129 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.139) 0:00:38.242 ********* 2026-03-17 00:46:35.310136 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.310143 | orchestrator | 2026-03-17 00:46:35.310150 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-17 00:46:35.310156 | orchestrator | Tuesday 17 March 2026 00:46:31 +0000 (0:00:00.137) 0:00:38.380 ********* 2026-03-17 00:46:35.310164 | orchestrator | ok: [testbed-node-4] => { 2026-03-17 00:46:35.310171 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-17 00:46:35.310178 | orchestrator | } 2026-03-17 00:46:35.310185 | orchestrator | 2026-03-17 00:46:35.310192 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-17 
00:46:35.310199 | orchestrator | Tuesday 17 March 2026 00:46:32 +0000 (0:00:00.163) 0:00:38.544 ********* 2026-03-17 00:46:35.310206 | orchestrator | ok: [testbed-node-4] => { 2026-03-17 00:46:35.310265 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-17 00:46:35.310272 | orchestrator | } 2026-03-17 00:46:35.310278 | orchestrator | 2026-03-17 00:46:35.310284 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-17 00:46:35.310290 | orchestrator | Tuesday 17 March 2026 00:46:32 +0000 (0:00:00.134) 0:00:38.679 ********* 2026-03-17 00:46:35.310303 | orchestrator | ok: [testbed-node-4] => { 2026-03-17 00:46:35.310310 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-17 00:46:35.310316 | orchestrator | } 2026-03-17 00:46:35.310322 | orchestrator | 2026-03-17 00:46:35.310328 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-17 00:46:35.310335 | orchestrator | Tuesday 17 March 2026 00:46:32 +0000 (0:00:00.348) 0:00:39.027 ********* 2026-03-17 00:46:35.310341 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:35.310347 | orchestrator | 2026-03-17 00:46:35.310353 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-17 00:46:35.310359 | orchestrator | Tuesday 17 March 2026 00:46:33 +0000 (0:00:00.551) 0:00:39.578 ********* 2026-03-17 00:46:35.310366 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:35.310372 | orchestrator | 2026-03-17 00:46:35.310378 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-17 00:46:35.310384 | orchestrator | Tuesday 17 March 2026 00:46:33 +0000 (0:00:00.527) 0:00:40.106 ********* 2026-03-17 00:46:35.310391 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:35.310397 | orchestrator | 2026-03-17 00:46:35.310403 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-17 00:46:35.310409 | orchestrator | Tuesday 17 March 2026 00:46:34 +0000 (0:00:00.537) 0:00:40.644 ********* 2026-03-17 00:46:35.310416 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:35.310422 | orchestrator | 2026-03-17 00:46:35.310428 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-17 00:46:35.310434 | orchestrator | Tuesday 17 March 2026 00:46:34 +0000 (0:00:00.145) 0:00:40.790 ********* 2026-03-17 00:46:35.310440 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.310446 | orchestrator | 2026-03-17 00:46:35.310452 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-17 00:46:35.310459 | orchestrator | Tuesday 17 March 2026 00:46:34 +0000 (0:00:00.105) 0:00:40.895 ********* 2026-03-17 00:46:35.310465 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.310471 | orchestrator | 2026-03-17 00:46:35.310477 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-17 00:46:35.310483 | orchestrator | Tuesday 17 March 2026 00:46:34 +0000 (0:00:00.117) 0:00:41.013 ********* 2026-03-17 00:46:35.310489 | orchestrator | ok: [testbed-node-4] => { 2026-03-17 00:46:35.310495 | orchestrator |  "vgs_report": { 2026-03-17 00:46:35.310501 | orchestrator |  "vg": [] 2026-03-17 00:46:35.310508 | orchestrator |  } 2026-03-17 00:46:35.310514 | orchestrator | } 2026-03-17 00:46:35.310520 | orchestrator | 2026-03-17 00:46:35.310526 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-17 00:46:35.310532 | orchestrator | Tuesday 17 March 2026 00:46:34 +0000 (0:00:00.147) 0:00:41.161 ********* 2026-03-17 00:46:35.310538 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.310544 | orchestrator | 2026-03-17 00:46:35.310550 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-17 00:46:35.310557 | orchestrator | Tuesday 17 March 2026 00:46:34 +0000 (0:00:00.137) 0:00:41.299 ********* 2026-03-17 00:46:35.310563 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.310569 | orchestrator | 2026-03-17 00:46:35.310575 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-17 00:46:35.310581 | orchestrator | Tuesday 17 March 2026 00:46:35 +0000 (0:00:00.137) 0:00:41.436 ********* 2026-03-17 00:46:35.310588 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.310594 | orchestrator | 2026-03-17 00:46:35.310600 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-17 00:46:35.310614 | orchestrator | Tuesday 17 March 2026 00:46:35 +0000 (0:00:00.153) 0:00:41.590 ********* 2026-03-17 00:46:35.310621 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:35.310627 | orchestrator | 2026-03-17 00:46:35.310640 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-17 00:46:40.012031 | orchestrator | Tuesday 17 March 2026 00:46:35 +0000 (0:00:00.137) 0:00:41.727 ********* 2026-03-17 00:46:40.012151 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.012166 | orchestrator | 2026-03-17 00:46:40.012178 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-17 00:46:40.012189 | orchestrator | Tuesday 17 March 2026 00:46:35 +0000 (0:00:00.309) 0:00:42.036 ********* 2026-03-17 00:46:40.012199 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.012208 | orchestrator | 2026-03-17 00:46:40.012218 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-17 00:46:40.012274 | orchestrator | Tuesday 17 March 2026 00:46:35 +0000 (0:00:00.138) 0:00:42.175 ********* 2026-03-17 00:46:40.012285 | orchestrator | skipping: [testbed-node-4] 
2026-03-17 00:46:40.012295 | orchestrator | 2026-03-17 00:46:40.012304 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-17 00:46:40.012314 | orchestrator | Tuesday 17 March 2026 00:46:35 +0000 (0:00:00.145) 0:00:42.321 ********* 2026-03-17 00:46:40.012324 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.012333 | orchestrator | 2026-03-17 00:46:40.012343 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-17 00:46:40.012353 | orchestrator | Tuesday 17 March 2026 00:46:36 +0000 (0:00:00.143) 0:00:42.465 ********* 2026-03-17 00:46:40.012362 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.012372 | orchestrator | 2026-03-17 00:46:40.012381 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-17 00:46:40.012391 | orchestrator | Tuesday 17 March 2026 00:46:36 +0000 (0:00:00.142) 0:00:42.607 ********* 2026-03-17 00:46:40.012401 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.012410 | orchestrator | 2026-03-17 00:46:40.012420 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-17 00:46:40.012429 | orchestrator | Tuesday 17 March 2026 00:46:36 +0000 (0:00:00.138) 0:00:42.746 ********* 2026-03-17 00:46:40.012439 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.012448 | orchestrator | 2026-03-17 00:46:40.012458 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-17 00:46:40.012467 | orchestrator | Tuesday 17 March 2026 00:46:36 +0000 (0:00:00.134) 0:00:42.881 ********* 2026-03-17 00:46:40.012477 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.012487 | orchestrator | 2026-03-17 00:46:40.012496 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-17 00:46:40.012506 | orchestrator | 
Tuesday 17 March 2026 00:46:36 +0000 (0:00:00.140) 0:00:43.022 ********* 2026-03-17 00:46:40.012515 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.012525 | orchestrator | 2026-03-17 00:46:40.012535 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-17 00:46:40.012544 | orchestrator | Tuesday 17 March 2026 00:46:36 +0000 (0:00:00.144) 0:00:43.166 ********* 2026-03-17 00:46:40.012554 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.012563 | orchestrator | 2026-03-17 00:46:40.012575 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-17 00:46:40.012601 | orchestrator | Tuesday 17 March 2026 00:46:36 +0000 (0:00:00.136) 0:00:43.303 ********* 2026-03-17 00:46:40.012614 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:40.012627 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:40.012638 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.012650 | orchestrator | 2026-03-17 00:46:40.012661 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-17 00:46:40.012672 | orchestrator | Tuesday 17 March 2026 00:46:37 +0000 (0:00:00.175) 0:00:43.478 ********* 2026-03-17 00:46:40.012683 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:40.012702 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:40.012713 | orchestrator | skipping: 
[testbed-node-4] 2026-03-17 00:46:40.012724 | orchestrator | 2026-03-17 00:46:40.012736 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-17 00:46:40.012747 | orchestrator | Tuesday 17 March 2026 00:46:37 +0000 (0:00:00.152) 0:00:43.630 ********* 2026-03-17 00:46:40.012758 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:40.012769 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:40.012780 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.012791 | orchestrator | 2026-03-17 00:46:40.012803 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-17 00:46:40.012814 | orchestrator | Tuesday 17 March 2026 00:46:37 +0000 (0:00:00.349) 0:00:43.980 ********* 2026-03-17 00:46:40.012825 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:40.012837 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:40.012849 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.012860 | orchestrator | 2026-03-17 00:46:40.012889 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-17 00:46:40.012901 | orchestrator | Tuesday 17 March 2026 00:46:37 +0000 (0:00:00.142) 0:00:44.123 ********* 2026-03-17 00:46:40.012913 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 
'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:40.012925 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:40.012936 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.012946 | orchestrator | 2026-03-17 00:46:40.012956 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-17 00:46:40.012965 | orchestrator | Tuesday 17 March 2026 00:46:37 +0000 (0:00:00.133) 0:00:44.257 ********* 2026-03-17 00:46:40.012975 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:40.012985 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:40.012995 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.013004 | orchestrator | 2026-03-17 00:46:40.013014 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-17 00:46:40.013024 | orchestrator | Tuesday 17 March 2026 00:46:37 +0000 (0:00:00.149) 0:00:44.406 ********* 2026-03-17 00:46:40.013034 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:40.013043 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:40.013053 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.013062 | orchestrator | 2026-03-17 00:46:40.013072 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-17 
00:46:40.013082 | orchestrator | Tuesday 17 March 2026 00:46:38 +0000 (0:00:00.154) 0:00:44.561 ********* 2026-03-17 00:46:40.013098 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:40.013124 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:40.013147 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.013162 | orchestrator | 2026-03-17 00:46:40.013178 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-17 00:46:40.013194 | orchestrator | Tuesday 17 March 2026 00:46:38 +0000 (0:00:00.156) 0:00:44.717 ********* 2026-03-17 00:46:40.013211 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:40.013222 | orchestrator | 2026-03-17 00:46:40.013283 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-17 00:46:40.013293 | orchestrator | Tuesday 17 March 2026 00:46:38 +0000 (0:00:00.552) 0:00:45.269 ********* 2026-03-17 00:46:40.013303 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:40.013312 | orchestrator | 2026-03-17 00:46:40.013322 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-17 00:46:40.013331 | orchestrator | Tuesday 17 March 2026 00:46:39 +0000 (0:00:00.553) 0:00:45.822 ********* 2026-03-17 00:46:40.013341 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:40.013351 | orchestrator | 2026-03-17 00:46:40.013360 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-17 00:46:40.013370 | orchestrator | Tuesday 17 March 2026 00:46:39 +0000 (0:00:00.142) 0:00:45.964 ********* 2026-03-17 00:46:40.013380 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'vg_name': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'}) 2026-03-17 00:46:40.013391 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'vg_name': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'}) 2026-03-17 00:46:40.013400 | orchestrator | 2026-03-17 00:46:40.013410 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-17 00:46:40.013420 | orchestrator | Tuesday 17 March 2026 00:46:39 +0000 (0:00:00.163) 0:00:46.128 ********* 2026-03-17 00:46:40.013430 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:40.013439 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:40.013449 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:40.013458 | orchestrator | 2026-03-17 00:46:40.013468 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-17 00:46:40.013478 | orchestrator | Tuesday 17 March 2026 00:46:39 +0000 (0:00:00.153) 0:00:46.281 ********* 2026-03-17 00:46:40.013487 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:40.013505 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:45.909892 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:45.910001 | orchestrator | 2026-03-17 00:46:45.910066 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-17 00:46:45.910083 | 
orchestrator | Tuesday 17 March 2026 00:46:40 +0000 (0:00:00.149) 0:00:46.431 ********* 2026-03-17 00:46:45.910095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})  2026-03-17 00:46:45.910108 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})  2026-03-17 00:46:45.910119 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:45.910130 | orchestrator | 2026-03-17 00:46:45.910142 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-17 00:46:45.910176 | orchestrator | Tuesday 17 March 2026 00:46:40 +0000 (0:00:00.152) 0:00:46.583 ********* 2026-03-17 00:46:45.910187 | orchestrator | ok: [testbed-node-4] => { 2026-03-17 00:46:45.910198 | orchestrator |  "lvm_report": { 2026-03-17 00:46:45.910210 | orchestrator |  "lv": [ 2026-03-17 00:46:45.910220 | orchestrator |  { 2026-03-17 00:46:45.910230 | orchestrator |  "lv_name": "osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800", 2026-03-17 00:46:45.910241 | orchestrator |  "vg_name": "ceph-13f697f5-12ba-5526-98d1-b1a9c265f800" 2026-03-17 00:46:45.910310 | orchestrator |  }, 2026-03-17 00:46:45.910319 | orchestrator |  { 2026-03-17 00:46:45.910329 | orchestrator |  "lv_name": "osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6", 2026-03-17 00:46:45.910338 | orchestrator |  "vg_name": "ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6" 2026-03-17 00:46:45.910347 | orchestrator |  } 2026-03-17 00:46:45.910357 | orchestrator |  ], 2026-03-17 00:46:45.910367 | orchestrator |  "pv": [ 2026-03-17 00:46:45.910375 | orchestrator |  { 2026-03-17 00:46:45.910385 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-17 00:46:45.910394 | orchestrator |  "vg_name": "ceph-13f697f5-12ba-5526-98d1-b1a9c265f800" 2026-03-17 00:46:45.910403 | orchestrator |  }, 2026-03-17 
00:46:45.910411 | orchestrator |  { 2026-03-17 00:46:45.910419 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-17 00:46:45.910428 | orchestrator |  "vg_name": "ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6" 2026-03-17 00:46:45.910437 | orchestrator |  } 2026-03-17 00:46:45.910448 | orchestrator |  ] 2026-03-17 00:46:45.910456 | orchestrator |  } 2026-03-17 00:46:45.910466 | orchestrator | } 2026-03-17 00:46:45.910476 | orchestrator | 2026-03-17 00:46:45.910486 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-17 00:46:45.910495 | orchestrator | 2026-03-17 00:46:45.910504 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-17 00:46:45.910513 | orchestrator | Tuesday 17 March 2026 00:46:40 +0000 (0:00:00.498) 0:00:47.081 ********* 2026-03-17 00:46:45.910523 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-17 00:46:45.910533 | orchestrator | 2026-03-17 00:46:45.910542 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-17 00:46:45.910552 | orchestrator | Tuesday 17 March 2026 00:46:40 +0000 (0:00:00.240) 0:00:47.321 ********* 2026-03-17 00:46:45.910562 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:46:45.910572 | orchestrator | 2026-03-17 00:46:45.910582 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.910591 | orchestrator | Tuesday 17 March 2026 00:46:41 +0000 (0:00:00.242) 0:00:47.564 ********* 2026-03-17 00:46:45.910601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-17 00:46:45.910610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-17 00:46:45.910618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-17 00:46:45.910628 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-17 00:46:45.910637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-17 00:46:45.910647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-17 00:46:45.910656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-17 00:46:45.910666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-17 00:46:45.910675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-17 00:46:45.910685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-17 00:46:45.910702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-17 00:46:45.910710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-17 00:46:45.910719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-17 00:46:45.910728 | orchestrator | 2026-03-17 00:46:45.910736 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.910748 | orchestrator | Tuesday 17 March 2026 00:46:41 +0000 (0:00:00.388) 0:00:47.952 ********* 2026-03-17 00:46:45.910757 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:45.910765 | orchestrator | 2026-03-17 00:46:45.910774 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.910783 | orchestrator | Tuesday 17 March 2026 00:46:41 +0000 (0:00:00.198) 0:00:48.150 ********* 2026-03-17 00:46:45.910791 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:45.910800 | orchestrator | 2026-03-17 
00:46:45.910808 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.910835 | orchestrator | Tuesday 17 March 2026 00:46:41 +0000 (0:00:00.199) 0:00:48.350 ********* 2026-03-17 00:46:45.910843 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:45.910851 | orchestrator | 2026-03-17 00:46:45.910858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.910866 | orchestrator | Tuesday 17 March 2026 00:46:42 +0000 (0:00:00.211) 0:00:48.561 ********* 2026-03-17 00:46:45.910874 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:45.910882 | orchestrator | 2026-03-17 00:46:45.910890 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.910897 | orchestrator | Tuesday 17 March 2026 00:46:42 +0000 (0:00:00.195) 0:00:48.757 ********* 2026-03-17 00:46:45.910905 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:45.910913 | orchestrator | 2026-03-17 00:46:45.910922 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.910931 | orchestrator | Tuesday 17 March 2026 00:46:42 +0000 (0:00:00.564) 0:00:49.321 ********* 2026-03-17 00:46:45.910940 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:45.910948 | orchestrator | 2026-03-17 00:46:45.910957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.910965 | orchestrator | Tuesday 17 March 2026 00:46:43 +0000 (0:00:00.183) 0:00:49.505 ********* 2026-03-17 00:46:45.910972 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:45.910979 | orchestrator | 2026-03-17 00:46:45.910987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.910995 | orchestrator | Tuesday 17 March 2026 00:46:43 +0000 (0:00:00.205) 
0:00:49.711 ********* 2026-03-17 00:46:45.911004 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:45.911012 | orchestrator | 2026-03-17 00:46:45.911021 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.911029 | orchestrator | Tuesday 17 March 2026 00:46:43 +0000 (0:00:00.187) 0:00:49.898 ********* 2026-03-17 00:46:45.911038 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f) 2026-03-17 00:46:45.911047 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f) 2026-03-17 00:46:45.911056 | orchestrator | 2026-03-17 00:46:45.911065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.911073 | orchestrator | Tuesday 17 March 2026 00:46:43 +0000 (0:00:00.402) 0:00:50.300 ********* 2026-03-17 00:46:45.911127 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a7deaf5a-cd70-43cd-92ab-ee3441c5e54f) 2026-03-17 00:46:45.911135 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a7deaf5a-cd70-43cd-92ab-ee3441c5e54f) 2026-03-17 00:46:45.911143 | orchestrator | 2026-03-17 00:46:45.911151 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.911169 | orchestrator | Tuesday 17 March 2026 00:46:44 +0000 (0:00:00.420) 0:00:50.721 ********* 2026-03-17 00:46:45.911178 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dd7becb9-0584-4efc-8944-d51272ed61fa) 2026-03-17 00:46:45.911187 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dd7becb9-0584-4efc-8944-d51272ed61fa) 2026-03-17 00:46:45.911196 | orchestrator | 2026-03-17 00:46:45.911204 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.911211 | orchestrator | Tuesday 17 
March 2026 00:46:44 +0000 (0:00:00.431) 0:00:51.152 ********* 2026-03-17 00:46:45.911218 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0a90ba68-315a-4ce4-a803-8ffceb4dacc1) 2026-03-17 00:46:45.911226 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0a90ba68-315a-4ce4-a803-8ffceb4dacc1) 2026-03-17 00:46:45.911234 | orchestrator | 2026-03-17 00:46:45.911242 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:46:45.911271 | orchestrator | Tuesday 17 March 2026 00:46:45 +0000 (0:00:00.413) 0:00:51.566 ********* 2026-03-17 00:46:45.911280 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-17 00:46:45.911289 | orchestrator | 2026-03-17 00:46:45.911297 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:45.911305 | orchestrator | Tuesday 17 March 2026 00:46:45 +0000 (0:00:00.336) 0:00:51.902 ********* 2026-03-17 00:46:45.911314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-17 00:46:45.911322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-17 00:46:45.911329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-17 00:46:45.911337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-17 00:46:45.911345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-17 00:46:45.911352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-17 00:46:45.911360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-17 00:46:45.911367 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-17 00:46:45.911375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-17 00:46:45.911384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-17 00:46:45.911392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-17 00:46:45.911408 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-17 00:46:54.875907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-17 00:46:54.876018 | orchestrator | 2026-03-17 00:46:54.876036 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:54.876048 | orchestrator | Tuesday 17 March 2026 00:46:45 +0000 (0:00:00.416) 0:00:52.319 ********* 2026-03-17 00:46:54.876060 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.876071 | orchestrator | 2026-03-17 00:46:54.876082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:54.876093 | orchestrator | Tuesday 17 March 2026 00:46:46 +0000 (0:00:00.234) 0:00:52.553 ********* 2026-03-17 00:46:54.876104 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.876115 | orchestrator | 2026-03-17 00:46:54.876126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:54.876137 | orchestrator | Tuesday 17 March 2026 00:46:46 +0000 (0:00:00.643) 0:00:53.196 ********* 2026-03-17 00:46:54.876148 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.876180 | orchestrator | 2026-03-17 00:46:54.876192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:54.876203 | 
orchestrator | Tuesday 17 March 2026 00:46:46 +0000 (0:00:00.217) 0:00:53.414 ********* 2026-03-17 00:46:54.876214 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.876225 | orchestrator | 2026-03-17 00:46:54.876235 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:54.876246 | orchestrator | Tuesday 17 March 2026 00:46:47 +0000 (0:00:00.194) 0:00:53.609 ********* 2026-03-17 00:46:54.876257 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.876268 | orchestrator | 2026-03-17 00:46:54.876346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:54.876359 | orchestrator | Tuesday 17 March 2026 00:46:47 +0000 (0:00:00.199) 0:00:53.808 ********* 2026-03-17 00:46:54.876370 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.876381 | orchestrator | 2026-03-17 00:46:54.876391 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:54.876402 | orchestrator | Tuesday 17 March 2026 00:46:47 +0000 (0:00:00.204) 0:00:54.013 ********* 2026-03-17 00:46:54.876413 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.876423 | orchestrator | 2026-03-17 00:46:54.876434 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:54.876445 | orchestrator | Tuesday 17 March 2026 00:46:47 +0000 (0:00:00.203) 0:00:54.216 ********* 2026-03-17 00:46:54.876455 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.876466 | orchestrator | 2026-03-17 00:46:54.876477 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:54.876487 | orchestrator | Tuesday 17 March 2026 00:46:47 +0000 (0:00:00.197) 0:00:54.414 ********* 2026-03-17 00:46:54.876498 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-17 00:46:54.876523 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-17 00:46:54.876535 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-17 00:46:54.876545 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-17 00:46:54.876556 | orchestrator | 2026-03-17 00:46:54.876567 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:54.876578 | orchestrator | Tuesday 17 March 2026 00:46:48 +0000 (0:00:00.669) 0:00:55.084 ********* 2026-03-17 00:46:54.876588 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.876599 | orchestrator | 2026-03-17 00:46:54.876610 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:54.876621 | orchestrator | Tuesday 17 March 2026 00:46:48 +0000 (0:00:00.201) 0:00:55.285 ********* 2026-03-17 00:46:54.876632 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.876643 | orchestrator | 2026-03-17 00:46:54.876653 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:54.876664 | orchestrator | Tuesday 17 March 2026 00:46:49 +0000 (0:00:00.164) 0:00:55.450 ********* 2026-03-17 00:46:54.876677 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.876696 | orchestrator | 2026-03-17 00:46:54.876723 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:46:54.876742 | orchestrator | Tuesday 17 March 2026 00:46:49 +0000 (0:00:00.190) 0:00:55.640 ********* 2026-03-17 00:46:54.876759 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.876776 | orchestrator | 2026-03-17 00:46:54.876793 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-17 00:46:54.876809 | orchestrator | Tuesday 17 March 2026 00:46:49 +0000 (0:00:00.196) 0:00:55.837 ********* 2026-03-17 00:46:54.876823 | orchestrator | skipping: [testbed-node-5] 2026-03-17 
00:46:54.876839 | orchestrator | 2026-03-17 00:46:54.876855 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-17 00:46:54.876872 | orchestrator | Tuesday 17 March 2026 00:46:49 +0000 (0:00:00.229) 0:00:56.067 ********* 2026-03-17 00:46:54.876890 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'}}) 2026-03-17 00:46:54.876921 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bc85b6b7-69fe-55db-81a6-3a78775dfc6c'}}) 2026-03-17 00:46:54.876941 | orchestrator | 2026-03-17 00:46:54.876960 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-17 00:46:54.876978 | orchestrator | Tuesday 17 March 2026 00:46:49 +0000 (0:00:00.197) 0:00:56.264 ********* 2026-03-17 00:46:54.876998 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'}) 2026-03-17 00:46:54.877018 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'}) 2026-03-17 00:46:54.877036 | orchestrator | 2026-03-17 00:46:54.877055 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-17 00:46:54.877108 | orchestrator | Tuesday 17 March 2026 00:46:51 +0000 (0:00:01.917) 0:00:58.182 ********* 2026-03-17 00:46:54.877129 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:46:54.877147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:46:54.877163 | orchestrator | skipping: 
[testbed-node-5] 2026-03-17 00:46:54.877181 | orchestrator | 2026-03-17 00:46:54.877200 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-17 00:46:54.877216 | orchestrator | Tuesday 17 March 2026 00:46:51 +0000 (0:00:00.141) 0:00:58.323 ********* 2026-03-17 00:46:54.877234 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'}) 2026-03-17 00:46:54.877251 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'}) 2026-03-17 00:46:54.877269 | orchestrator | 2026-03-17 00:46:54.877315 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-17 00:46:54.877332 | orchestrator | Tuesday 17 March 2026 00:46:53 +0000 (0:00:01.406) 0:00:59.730 ********* 2026-03-17 00:46:54.877350 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:46:54.877368 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:46:54.877386 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.877405 | orchestrator | 2026-03-17 00:46:54.877424 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-17 00:46:54.877442 | orchestrator | Tuesday 17 March 2026 00:46:53 +0000 (0:00:00.173) 0:00:59.903 ********* 2026-03-17 00:46:54.877461 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.877479 | orchestrator | 2026-03-17 00:46:54.877497 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-17 00:46:54.877511 | 
orchestrator | Tuesday 17 March 2026 00:46:53 +0000 (0:00:00.142) 0:01:00.046 ********* 2026-03-17 00:46:54.877522 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:46:54.877542 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:46:54.877554 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.877564 | orchestrator | 2026-03-17 00:46:54.877575 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-17 00:46:54.877586 | orchestrator | Tuesday 17 March 2026 00:46:53 +0000 (0:00:00.145) 0:01:00.191 ********* 2026-03-17 00:46:54.877606 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.877617 | orchestrator | 2026-03-17 00:46:54.877628 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-17 00:46:54.877638 | orchestrator | Tuesday 17 March 2026 00:46:53 +0000 (0:00:00.131) 0:01:00.323 ********* 2026-03-17 00:46:54.877649 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:46:54.877660 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:46:54.877670 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.877681 | orchestrator | 2026-03-17 00:46:54.877692 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-17 00:46:54.877702 | orchestrator | Tuesday 17 March 2026 00:46:54 +0000 (0:00:00.159) 0:01:00.483 ********* 2026-03-17 00:46:54.877713 | orchestrator | 
skipping: [testbed-node-5] 2026-03-17 00:46:54.877724 | orchestrator | 2026-03-17 00:46:54.877734 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-17 00:46:54.877745 | orchestrator | Tuesday 17 March 2026 00:46:54 +0000 (0:00:00.139) 0:01:00.623 ********* 2026-03-17 00:46:54.877756 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:46:54.877767 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:46:54.877777 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:54.877788 | orchestrator | 2026-03-17 00:46:54.877799 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-17 00:46:54.877815 | orchestrator | Tuesday 17 March 2026 00:46:54 +0000 (0:00:00.150) 0:01:00.773 ********* 2026-03-17 00:46:54.877833 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:46:54.877852 | orchestrator | 2026-03-17 00:46:54.877870 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-17 00:46:54.877889 | orchestrator | Tuesday 17 March 2026 00:46:54 +0000 (0:00:00.365) 0:01:01.138 ********* 2026-03-17 00:46:54.877925 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:00.919283 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:00.919421 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.919441 | orchestrator | 2026-03-17 00:47:00.919455 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-17 00:47:00.919471 | orchestrator | Tuesday 17 March 2026 00:46:54 +0000 (0:00:00.156) 0:01:01.295 ********* 2026-03-17 00:47:00.919480 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:00.919488 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:00.919495 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.919502 | orchestrator | 2026-03-17 00:47:00.919509 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-17 00:47:00.919517 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.148) 0:01:01.443 ********* 2026-03-17 00:47:00.919523 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:00.919531 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:00.919558 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.919565 | orchestrator | 2026-03-17 00:47:00.919572 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-17 00:47:00.919579 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.173) 0:01:01.616 ********* 2026-03-17 00:47:00.919585 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.919592 | orchestrator | 2026-03-17 00:47:00.919599 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-17 00:47:00.919605 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 
(0:00:00.132) 0:01:01.749 ********* 2026-03-17 00:47:00.919612 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.919619 | orchestrator | 2026-03-17 00:47:00.919626 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-17 00:47:00.919632 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.134) 0:01:01.884 ********* 2026-03-17 00:47:00.919639 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.919646 | orchestrator | 2026-03-17 00:47:00.919653 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-17 00:47:00.919659 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.138) 0:01:02.022 ********* 2026-03-17 00:47:00.919666 | orchestrator | ok: [testbed-node-5] => { 2026-03-17 00:47:00.919674 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-17 00:47:00.919681 | orchestrator | } 2026-03-17 00:47:00.919688 | orchestrator | 2026-03-17 00:47:00.919696 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-17 00:47:00.919707 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.139) 0:01:02.161 ********* 2026-03-17 00:47:00.919718 | orchestrator | ok: [testbed-node-5] => { 2026-03-17 00:47:00.919729 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-17 00:47:00.919740 | orchestrator | } 2026-03-17 00:47:00.919753 | orchestrator | 2026-03-17 00:47:00.919764 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-17 00:47:00.919774 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.133) 0:01:02.294 ********* 2026-03-17 00:47:00.919787 | orchestrator | ok: [testbed-node-5] => { 2026-03-17 00:47:00.919794 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-17 00:47:00.919800 | orchestrator | } 2026-03-17 00:47:00.919807 | orchestrator | 2026-03-17 00:47:00.919814 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-17 00:47:00.919820 | orchestrator | Tuesday 17 March 2026 00:46:56 +0000 (0:00:00.141) 0:01:02.436 ********* 2026-03-17 00:47:00.919827 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:47:00.919834 | orchestrator | 2026-03-17 00:47:00.919842 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-17 00:47:00.919849 | orchestrator | Tuesday 17 March 2026 00:46:56 +0000 (0:00:00.553) 0:01:02.989 ********* 2026-03-17 00:47:00.919857 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:47:00.919865 | orchestrator | 2026-03-17 00:47:00.919873 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-17 00:47:00.919882 | orchestrator | Tuesday 17 March 2026 00:46:57 +0000 (0:00:00.569) 0:01:03.558 ********* 2026-03-17 00:47:00.919894 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:47:00.919904 | orchestrator | 2026-03-17 00:47:00.919915 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-17 00:47:00.919926 | orchestrator | Tuesday 17 March 2026 00:46:57 +0000 (0:00:00.762) 0:01:04.321 ********* 2026-03-17 00:47:00.919937 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:47:00.919949 | orchestrator | 2026-03-17 00:47:00.919960 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-17 00:47:00.919971 | orchestrator | Tuesday 17 March 2026 00:46:58 +0000 (0:00:00.145) 0:01:04.466 ********* 2026-03-17 00:47:00.919981 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.919988 | orchestrator | 2026-03-17 00:47:00.919995 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-17 00:47:00.920008 | orchestrator | Tuesday 17 March 2026 00:46:58 +0000 (0:00:00.122) 0:01:04.589 ********* 2026-03-17 00:47:00.920015 | 
orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920022 | orchestrator | 2026-03-17 00:47:00.920028 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-17 00:47:00.920035 | orchestrator | Tuesday 17 March 2026 00:46:58 +0000 (0:00:00.118) 0:01:04.707 ********* 2026-03-17 00:47:00.920042 | orchestrator | ok: [testbed-node-5] => { 2026-03-17 00:47:00.920048 | orchestrator |  "vgs_report": { 2026-03-17 00:47:00.920055 | orchestrator |  "vg": [] 2026-03-17 00:47:00.920083 | orchestrator |  } 2026-03-17 00:47:00.920095 | orchestrator | } 2026-03-17 00:47:00.920106 | orchestrator | 2026-03-17 00:47:00.920117 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-17 00:47:00.920124 | orchestrator | Tuesday 17 March 2026 00:46:58 +0000 (0:00:00.149) 0:01:04.856 ********* 2026-03-17 00:47:00.920130 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920137 | orchestrator | 2026-03-17 00:47:00.920144 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-17 00:47:00.920150 | orchestrator | Tuesday 17 March 2026 00:46:58 +0000 (0:00:00.127) 0:01:04.983 ********* 2026-03-17 00:47:00.920157 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920164 | orchestrator | 2026-03-17 00:47:00.920170 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-17 00:47:00.920177 | orchestrator | Tuesday 17 March 2026 00:46:58 +0000 (0:00:00.124) 0:01:05.107 ********* 2026-03-17 00:47:00.920183 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920190 | orchestrator | 2026-03-17 00:47:00.920196 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-17 00:47:00.920203 | orchestrator | Tuesday 17 March 2026 00:46:58 +0000 (0:00:00.144) 0:01:05.252 ********* 2026-03-17 00:47:00.920210 | 
orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920217 | orchestrator | 2026-03-17 00:47:00.920223 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-17 00:47:00.920230 | orchestrator | Tuesday 17 March 2026 00:46:58 +0000 (0:00:00.134) 0:01:05.387 ********* 2026-03-17 00:47:00.920236 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920243 | orchestrator | 2026-03-17 00:47:00.920250 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-17 00:47:00.920256 | orchestrator | Tuesday 17 March 2026 00:46:59 +0000 (0:00:00.125) 0:01:05.513 ********* 2026-03-17 00:47:00.920263 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920270 | orchestrator | 2026-03-17 00:47:00.920290 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-17 00:47:00.920319 | orchestrator | Tuesday 17 March 2026 00:46:59 +0000 (0:00:00.126) 0:01:05.640 ********* 2026-03-17 00:47:00.920326 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920333 | orchestrator | 2026-03-17 00:47:00.920339 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-17 00:47:00.920346 | orchestrator | Tuesday 17 March 2026 00:46:59 +0000 (0:00:00.129) 0:01:05.769 ********* 2026-03-17 00:47:00.920353 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920359 | orchestrator | 2026-03-17 00:47:00.920366 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-17 00:47:00.920373 | orchestrator | Tuesday 17 March 2026 00:46:59 +0000 (0:00:00.307) 0:01:06.076 ********* 2026-03-17 00:47:00.920379 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920386 | orchestrator | 2026-03-17 00:47:00.920396 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-03-17 00:47:00.920403 | orchestrator | Tuesday 17 March 2026 00:46:59 +0000 (0:00:00.131) 0:01:06.208 ********* 2026-03-17 00:47:00.920410 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920417 | orchestrator | 2026-03-17 00:47:00.920423 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-17 00:47:00.920430 | orchestrator | Tuesday 17 March 2026 00:46:59 +0000 (0:00:00.131) 0:01:06.339 ********* 2026-03-17 00:47:00.920442 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920451 | orchestrator | 2026-03-17 00:47:00.920461 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-17 00:47:00.920472 | orchestrator | Tuesday 17 March 2026 00:47:00 +0000 (0:00:00.127) 0:01:06.467 ********* 2026-03-17 00:47:00.920483 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920494 | orchestrator | 2026-03-17 00:47:00.920506 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-17 00:47:00.920517 | orchestrator | Tuesday 17 March 2026 00:47:00 +0000 (0:00:00.135) 0:01:06.602 ********* 2026-03-17 00:47:00.920527 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920533 | orchestrator | 2026-03-17 00:47:00.920540 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-17 00:47:00.920547 | orchestrator | Tuesday 17 March 2026 00:47:00 +0000 (0:00:00.134) 0:01:06.736 ********* 2026-03-17 00:47:00.920553 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920560 | orchestrator | 2026-03-17 00:47:00.920566 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-17 00:47:00.920573 | orchestrator | Tuesday 17 March 2026 00:47:00 +0000 (0:00:00.126) 0:01:06.863 ********* 2026-03-17 00:47:00.920579 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:00.920586 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:00.920593 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920600 | orchestrator | 2026-03-17 00:47:00.920606 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-17 00:47:00.920613 | orchestrator | Tuesday 17 March 2026 00:47:00 +0000 (0:00:00.152) 0:01:07.015 ********* 2026-03-17 00:47:00.920620 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:00.920626 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:00.920633 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:00.920640 | orchestrator | 2026-03-17 00:47:00.920646 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-17 00:47:00.920653 | orchestrator | Tuesday 17 March 2026 00:47:00 +0000 (0:00:00.154) 0:01:07.169 ********* 2026-03-17 00:47:00.920666 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:03.896010 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:03.896111 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:03.896127 | orchestrator | 2026-03-17 00:47:03.896140 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-17 00:47:03.896153 | orchestrator | Tuesday 17 March 2026 00:47:00 +0000 (0:00:00.169) 0:01:07.338 ********* 2026-03-17 00:47:03.896165 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:03.896176 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:03.896187 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:03.896198 | orchestrator | 2026-03-17 00:47:03.896209 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-17 00:47:03.896220 | orchestrator | Tuesday 17 March 2026 00:47:01 +0000 (0:00:00.152) 0:01:07.491 ********* 2026-03-17 00:47:03.896271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:03.896283 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:03.896294 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:03.896361 | orchestrator | 2026-03-17 00:47:03.896374 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-17 00:47:03.896385 | orchestrator | Tuesday 17 March 2026 00:47:01 +0000 (0:00:00.150) 0:01:07.641 ********* 2026-03-17 00:47:03.896396 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:03.896408 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:03.896433 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:03.896444 | orchestrator | 2026-03-17 00:47:03.896455 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-17 00:47:03.896466 | orchestrator | Tuesday 17 March 2026 00:47:01 +0000 (0:00:00.343) 0:01:07.985 ********* 2026-03-17 00:47:03.896477 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:03.896488 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:03.896499 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:03.896510 | orchestrator | 2026-03-17 00:47:03.896522 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-17 00:47:03.896533 | orchestrator | Tuesday 17 March 2026 00:47:01 +0000 (0:00:00.160) 0:01:08.145 ********* 2026-03-17 00:47:03.896544 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:03.896558 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:03.896571 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:03.896583 | orchestrator | 2026-03-17 00:47:03.896596 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-17 00:47:03.896614 | orchestrator | Tuesday 17 March 2026 00:47:01 +0000 (0:00:00.157) 0:01:08.303 ********* 2026-03-17 00:47:03.896634 | 
orchestrator | ok: [testbed-node-5] 2026-03-17 00:47:03.896656 | orchestrator | 2026-03-17 00:47:03.896676 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-17 00:47:03.896697 | orchestrator | Tuesday 17 March 2026 00:47:02 +0000 (0:00:00.544) 0:01:08.847 ********* 2026-03-17 00:47:03.896717 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:47:03.896736 | orchestrator | 2026-03-17 00:47:03.896756 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-17 00:47:03.896777 | orchestrator | Tuesday 17 March 2026 00:47:02 +0000 (0:00:00.554) 0:01:09.401 ********* 2026-03-17 00:47:03.896797 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:47:03.896818 | orchestrator | 2026-03-17 00:47:03.896839 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-17 00:47:03.896859 | orchestrator | Tuesday 17 March 2026 00:47:03 +0000 (0:00:00.149) 0:01:09.551 ********* 2026-03-17 00:47:03.896879 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'vg_name': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'}) 2026-03-17 00:47:03.896900 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'vg_name': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'}) 2026-03-17 00:47:03.896933 | orchestrator | 2026-03-17 00:47:03.896954 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-17 00:47:03.896975 | orchestrator | Tuesday 17 March 2026 00:47:03 +0000 (0:00:00.153) 0:01:09.704 ********* 2026-03-17 00:47:03.897019 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:03.897039 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:03.897058 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:03.897077 | orchestrator | 2026-03-17 00:47:03.897096 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-17 00:47:03.897114 | orchestrator | Tuesday 17 March 2026 00:47:03 +0000 (0:00:00.149) 0:01:09.853 ********* 2026-03-17 00:47:03.897133 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:03.897153 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:03.897172 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:03.897192 | orchestrator | 2026-03-17 00:47:03.897209 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-17 00:47:03.897228 | orchestrator | Tuesday 17 March 2026 00:47:03 +0000 (0:00:00.147) 0:01:10.001 ********* 2026-03-17 00:47:03.897247 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})  2026-03-17 00:47:03.897266 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})  2026-03-17 00:47:03.897284 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:03.897303 | orchestrator | 2026-03-17 00:47:03.897350 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-17 00:47:03.897369 | orchestrator | Tuesday 17 March 2026 00:47:03 +0000 (0:00:00.138) 0:01:10.139 ********* 2026-03-17 00:47:03.897387 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-17 00:47:03.897405 | orchestrator |  "lvm_report": { 2026-03-17 00:47:03.897425 | orchestrator |  "lv": [ 2026-03-17 00:47:03.897443 | orchestrator |  { 2026-03-17 00:47:03.897461 | orchestrator |  "lv_name": "osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0", 2026-03-17 00:47:03.897489 | orchestrator |  "vg_name": "ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0" 2026-03-17 00:47:03.897508 | orchestrator |  }, 2026-03-17 00:47:03.897525 | orchestrator |  { 2026-03-17 00:47:03.897542 | orchestrator |  "lv_name": "osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c", 2026-03-17 00:47:03.897559 | orchestrator |  "vg_name": "ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c" 2026-03-17 00:47:03.897577 | orchestrator |  } 2026-03-17 00:47:03.897596 | orchestrator |  ], 2026-03-17 00:47:03.897614 | orchestrator |  "pv": [ 2026-03-17 00:47:03.897632 | orchestrator |  { 2026-03-17 00:47:03.897650 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-17 00:47:03.897668 | orchestrator |  "vg_name": "ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0" 2026-03-17 00:47:03.897687 | orchestrator |  }, 2026-03-17 00:47:03.897705 | orchestrator |  { 2026-03-17 00:47:03.897724 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-17 00:47:03.897742 | orchestrator |  "vg_name": "ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c" 2026-03-17 00:47:03.897761 | orchestrator |  } 2026-03-17 00:47:03.897780 | orchestrator |  ] 2026-03-17 00:47:03.897797 | orchestrator |  } 2026-03-17 00:47:03.897816 | orchestrator | } 2026-03-17 00:47:03.897846 | orchestrator | 2026-03-17 00:47:03.897858 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:47:03.897868 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-17 00:47:03.897880 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-17 00:47:03.897891 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-17 00:47:03.897902 | orchestrator | 2026-03-17 00:47:03.897912 | orchestrator | 2026-03-17 00:47:03.897923 | orchestrator | 2026-03-17 00:47:03.897934 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:47:03.897945 | orchestrator | Tuesday 17 March 2026 00:47:03 +0000 (0:00:00.154) 0:01:10.294 ********* 2026-03-17 00:47:03.897956 | orchestrator | =============================================================================== 2026-03-17 00:47:03.897966 | orchestrator | Create block VGs -------------------------------------------------------- 5.67s 2026-03-17 00:47:03.897977 | orchestrator | Create block LVs -------------------------------------------------------- 4.20s 2026-03-17 00:47:03.897988 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.79s 2026-03-17 00:47:03.897999 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.73s 2026-03-17 00:47:03.898009 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s 2026-03-17 00:47:03.898087 | orchestrator | Add known partitions to the list of available block devices ------------- 1.60s 2026-03-17 00:47:03.898099 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.60s 2026-03-17 00:47:03.898110 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.57s 2026-03-17 00:47:03.898168 | orchestrator | Add known links to the list of available block devices ------------------ 1.26s 2026-03-17 00:47:04.260246 | orchestrator | Print LVM report data --------------------------------------------------- 0.97s 2026-03-17 00:47:04.260379 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2026-03-17 00:47:04.260405 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.83s 2026-03-17 00:47:04.260427 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2026-03-17 00:47:04.260447 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-03-17 00:47:04.260466 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.69s 2026-03-17 00:47:04.260487 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s 2026-03-17 00:47:04.260510 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2026-03-17 00:47:04.260531 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.65s 2026-03-17 00:47:04.260551 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.65s 2026-03-17 00:47:04.260571 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.65s 2026-03-17 00:47:16.506882 | orchestrator | 2026-03-17 00:47:16 | INFO  | Task bdff87cb-f976-4c4c-b290-83fe2d22fc66 (facts) was prepared for execution. 2026-03-17 00:47:16.506987 | orchestrator | 2026-03-17 00:47:16 | INFO  | It takes a moment until task bdff87cb-f976-4c4c-b290-83fe2d22fc66 (facts) has been started and output is visible here. 
2026-03-17 00:47:28.722746 | orchestrator | 2026-03-17 00:47:28.722861 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-17 00:47:28.722878 | orchestrator | 2026-03-17 00:47:28.722890 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-17 00:47:28.722901 | orchestrator | Tuesday 17 March 2026 00:47:20 +0000 (0:00:00.250) 0:00:00.250 ********* 2026-03-17 00:47:28.722938 | orchestrator | ok: [testbed-manager] 2026-03-17 00:47:28.722951 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:47:28.722962 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:47:28.722972 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:47:28.722983 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:47:28.722993 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:47:28.723004 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:47:28.723015 | orchestrator | 2026-03-17 00:47:28.723026 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-17 00:47:28.723052 | orchestrator | Tuesday 17 March 2026 00:47:21 +0000 (0:00:01.078) 0:00:01.329 ********* 2026-03-17 00:47:28.723065 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:47:28.723076 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:47:28.723087 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:47:28.723098 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:47:28.723109 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:47:28.723119 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:47:28.723130 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:28.723141 | orchestrator | 2026-03-17 00:47:28.723152 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-17 00:47:28.723162 | orchestrator | 2026-03-17 00:47:28.723173 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-17 00:47:28.723184 | orchestrator | Tuesday 17 March 2026 00:47:22 +0000 (0:00:01.041) 0:00:02.370 ********* 2026-03-17 00:47:28.723194 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:47:28.723205 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:47:28.723216 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:47:28.723226 | orchestrator | ok: [testbed-manager] 2026-03-17 00:47:28.723237 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:47:28.723248 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:47:28.723258 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:47:28.723269 | orchestrator | 2026-03-17 00:47:28.723280 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-17 00:47:28.723293 | orchestrator | 2026-03-17 00:47:28.723306 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-17 00:47:28.723319 | orchestrator | Tuesday 17 March 2026 00:47:27 +0000 (0:00:05.085) 0:00:07.455 ********* 2026-03-17 00:47:28.723331 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:47:28.723343 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:47:28.723356 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:47:28.723368 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:47:28.723380 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:47:28.723425 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:47:28.723444 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:28.723463 | orchestrator | 2026-03-17 00:47:28.723485 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:47:28.723499 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:28.723512 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-17 00:47:28.723525 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:28.723537 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:28.723549 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:28.723562 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:28.723574 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:28.723595 | orchestrator | 2026-03-17 00:47:28.723607 | orchestrator | 2026-03-17 00:47:28.723620 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:47:28.723633 | orchestrator | Tuesday 17 March 2026 00:47:28 +0000 (0:00:00.496) 0:00:07.952 ********* 2026-03-17 00:47:28.723646 | orchestrator | =============================================================================== 2026-03-17 00:47:28.723656 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.09s 2026-03-17 00:47:28.723667 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s 2026-03-17 00:47:28.723677 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.04s 2026-03-17 00:47:28.723688 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-03-17 00:47:40.960228 | orchestrator | 2026-03-17 00:47:40 | INFO  | Task b9d7410a-201b-4378-87ba-8184586a8cb0 (frr) was prepared for execution. 2026-03-17 00:47:40.960330 | orchestrator | 2026-03-17 00:47:40 | INFO  | It takes a moment until task b9d7410a-201b-4378-87ba-8184586a8cb0 (frr) has been started and output is visible here. 
2026-03-17 00:48:04.010242 | orchestrator | 2026-03-17 00:48:04.010326 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-17 00:48:04.010337 | orchestrator | 2026-03-17 00:48:04.010345 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-17 00:48:04.010352 | orchestrator | Tuesday 17 March 2026 00:47:44 +0000 (0:00:00.166) 0:00:00.166 ********* 2026-03-17 00:48:04.010359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-17 00:48:04.010367 | orchestrator | 2026-03-17 00:48:04.010373 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-17 00:48:04.010380 | orchestrator | Tuesday 17 March 2026 00:47:44 +0000 (0:00:00.168) 0:00:00.334 ********* 2026-03-17 00:48:04.010386 | orchestrator | changed: [testbed-manager] 2026-03-17 00:48:04.010393 | orchestrator | 2026-03-17 00:48:04.010400 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-17 00:48:04.010406 | orchestrator | Tuesday 17 March 2026 00:47:45 +0000 (0:00:00.955) 0:00:01.289 ********* 2026-03-17 00:48:04.010413 | orchestrator | changed: [testbed-manager] 2026-03-17 00:48:04.010419 | orchestrator | 2026-03-17 00:48:04.010426 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-17 00:48:04.010432 | orchestrator | Tuesday 17 March 2026 00:47:54 +0000 (0:00:08.551) 0:00:09.841 ********* 2026-03-17 00:48:04.010439 | orchestrator | ok: [testbed-manager] 2026-03-17 00:48:04.010446 | orchestrator | 2026-03-17 00:48:04.010452 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-17 00:48:04.010459 | orchestrator | Tuesday 17 March 2026 00:47:55 +0000 (0:00:00.972) 0:00:10.813 ********* 2026-03-17 
00:48:04.010465 | orchestrator | changed: [testbed-manager] 2026-03-17 00:48:04.010471 | orchestrator | 2026-03-17 00:48:04.010477 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-17 00:48:04.010484 | orchestrator | Tuesday 17 March 2026 00:47:56 +0000 (0:00:00.939) 0:00:11.753 ********* 2026-03-17 00:48:04.010566 | orchestrator | ok: [testbed-manager] 2026-03-17 00:48:04.010575 | orchestrator | 2026-03-17 00:48:04.010582 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-17 00:48:04.010589 | orchestrator | Tuesday 17 March 2026 00:47:57 +0000 (0:00:01.137) 0:00:12.890 ********* 2026-03-17 00:48:04.010599 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:48:04.010609 | orchestrator | 2026-03-17 00:48:04.010620 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-17 00:48:04.010629 | orchestrator | Tuesday 17 March 2026 00:47:57 +0000 (0:00:00.145) 0:00:13.036 ********* 2026-03-17 00:48:04.010660 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:48:04.010684 | orchestrator | 2026-03-17 00:48:04.010691 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-17 00:48:04.010697 | orchestrator | Tuesday 17 March 2026 00:47:57 +0000 (0:00:00.149) 0:00:13.186 ********* 2026-03-17 00:48:04.010703 | orchestrator | changed: [testbed-manager] 2026-03-17 00:48:04.010710 | orchestrator | 2026-03-17 00:48:04.010716 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-17 00:48:04.010722 | orchestrator | Tuesday 17 March 2026 00:47:58 +0000 (0:00:00.945) 0:00:14.131 ********* 2026-03-17 00:48:04.010729 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-17 00:48:04.010735 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-17 00:48:04.010743 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-17 00:48:04.010749 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-17 00:48:04.010756 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-17 00:48:04.010762 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-17 00:48:04.010768 | orchestrator | 2026-03-17 00:48:04.010775 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-17 00:48:04.010781 | orchestrator | Tuesday 17 March 2026 00:48:00 +0000 (0:00:02.126) 0:00:16.258 ********* 2026-03-17 00:48:04.010787 | orchestrator | ok: [testbed-manager] 2026-03-17 00:48:04.010793 | orchestrator | 2026-03-17 00:48:04.010799 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-17 00:48:04.010806 | orchestrator | Tuesday 17 March 2026 00:48:02 +0000 (0:00:01.556) 0:00:17.814 ********* 2026-03-17 00:48:04.010812 | orchestrator | changed: [testbed-manager] 2026-03-17 00:48:04.010818 | orchestrator | 2026-03-17 00:48:04.010824 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:48:04.010831 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:48:04.010838 | orchestrator | 2026-03-17 00:48:04.010844 | orchestrator | 2026-03-17 00:48:04.010850 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:48:04.010857 | orchestrator | Tuesday 17 March 2026 00:48:03 +0000 (0:00:01.339) 0:00:19.153 ********* 2026-03-17 00:48:04.010863 | 
orchestrator | =============================================================================== 2026-03-17 00:48:04.010869 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.55s 2026-03-17 00:48:04.010875 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.13s 2026-03-17 00:48:04.010881 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.56s 2026-03-17 00:48:04.010887 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.34s 2026-03-17 00:48:04.010894 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.14s 2026-03-17 00:48:04.010914 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.97s 2026-03-17 00:48:04.010921 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 0.96s 2026-03-17 00:48:04.010927 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.95s 2026-03-17 00:48:04.010934 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.94s 2026-03-17 00:48:04.010940 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.17s 2026-03-17 00:48:04.010946 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-03-17 00:48:04.010952 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-03-17 00:48:04.277370 | orchestrator | 2026-03-17 00:48:04.279356 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Mar 17 00:48:04 UTC 2026 2026-03-17 00:48:04.279424 | orchestrator | 2026-03-17 00:48:06.169479 | orchestrator | 2026-03-17 00:48:06 | INFO  | Collection nutshell is prepared for execution 2026-03-17 00:48:06.169674 | orchestrator | 2026-03-17 00:48:06 | INFO  | A [0] - 
dotfiles 2026-03-17 00:48:16.193617 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [0] - homer 2026-03-17 00:48:16.193695 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [0] - netdata 2026-03-17 00:48:16.193702 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [0] - openstackclient 2026-03-17 00:48:16.193708 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [0] - phpmyadmin 2026-03-17 00:48:16.193713 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [0] - common 2026-03-17 00:48:16.198386 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [1] -- loadbalancer 2026-03-17 00:48:16.198473 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [2] --- opensearch 2026-03-17 00:48:16.198487 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [2] --- mariadb-ng 2026-03-17 00:48:16.199185 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [3] ---- horizon 2026-03-17 00:48:16.199220 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [3] ---- keystone 2026-03-17 00:48:16.199238 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [4] ----- neutron 2026-03-17 00:48:16.199256 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [5] ------ wait-for-nova 2026-03-17 00:48:16.199406 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [6] ------- octavia 2026-03-17 00:48:16.201291 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [4] ----- barbican 2026-03-17 00:48:16.201338 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [4] ----- designate 2026-03-17 00:48:16.201348 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [4] ----- ironic 2026-03-17 00:48:16.201626 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [4] ----- placement 2026-03-17 00:48:16.201647 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [4] ----- magnum 2026-03-17 00:48:16.202652 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [1] -- openvswitch 2026-03-17 00:48:16.202692 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [2] --- ovn 2026-03-17 00:48:16.202827 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [1] -- memcached 2026-03-17 
00:48:16.203132 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [1] -- redis 2026-03-17 00:48:16.203153 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [1] -- rabbitmq-ng 2026-03-17 00:48:16.203501 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [0] - kubernetes 2026-03-17 00:48:16.206239 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [1] -- kubeconfig 2026-03-17 00:48:16.206317 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [1] -- copy-kubeconfig 2026-03-17 00:48:16.206330 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [0] - ceph 2026-03-17 00:48:16.208626 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [1] -- ceph-pools 2026-03-17 00:48:16.208875 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [2] --- copy-ceph-keys 2026-03-17 00:48:16.208890 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [3] ---- cephclient 2026-03-17 00:48:16.208896 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-17 00:48:16.208902 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [4] ----- wait-for-keystone 2026-03-17 00:48:16.209026 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-17 00:48:16.209041 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [5] ------ glance 2026-03-17 00:48:16.209051 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [5] ------ cinder 2026-03-17 00:48:16.209190 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [5] ------ nova 2026-03-17 00:48:16.209836 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [4] ----- prometheus 2026-03-17 00:48:16.209894 | orchestrator | 2026-03-17 00:48:16 | INFO  | A [5] ------ grafana 2026-03-17 00:48:16.392919 | orchestrator | 2026-03-17 00:48:16 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-17 00:48:16.392996 | orchestrator | 2026-03-17 00:48:16 | INFO  | Tasks are running in the background 2026-03-17 00:48:19.081703 | orchestrator | 2026-03-17 00:48:19 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-03-17 00:48:21.189106 | orchestrator | 2026-03-17 00:48:21 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:48:21.189210 | orchestrator | 2026-03-17 00:48:21 | INFO  | Task d2669867-12ee-460e-a174-4ea948540269 is in state STARTED 2026-03-17 00:48:21.191805 | orchestrator | 2026-03-17 00:48:21 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:48:21.192109 | orchestrator | 2026-03-17 00:48:21 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:48:21.192531 | orchestrator | 2026-03-17 00:48:21 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:48:21.193183 | orchestrator | 2026-03-17 00:48:21 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:48:21.195851 | orchestrator | 2026-03-17 00:48:21 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:48:21.195903 | orchestrator | 2026-03-17 00:48:21 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:24.251189 | orchestrator | 2026-03-17 00:48:24 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:48:24.252121 | orchestrator | 2026-03-17 00:48:24 | INFO  | Task d2669867-12ee-460e-a174-4ea948540269 is in state STARTED 2026-03-17 00:48:24.252174 | orchestrator | 2026-03-17 00:48:24 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:48:24.255786 | orchestrator | 2026-03-17 00:48:24 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:48:24.255836 | orchestrator | 2026-03-17 00:48:24 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:48:24.255844 | orchestrator | 2026-03-17 00:48:24 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:48:24.255851 | orchestrator | 2026-03-17 00:48:24 | INFO  | Task 
1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:48:24.255858 | orchestrator | 2026-03-17 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:27.292844 | orchestrator | 2026-03-17 00:48:27 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:48:27.292952 | orchestrator | 2026-03-17 00:48:27 | INFO  | Task d2669867-12ee-460e-a174-4ea948540269 is in state STARTED 2026-03-17 00:48:27.292982 | orchestrator | 2026-03-17 00:48:27 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:48:27.293422 | orchestrator | 2026-03-17 00:48:27 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:48:27.293808 | orchestrator | 2026-03-17 00:48:27 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:48:27.296085 | orchestrator | 2026-03-17 00:48:27 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:48:27.297438 | orchestrator | 2026-03-17 00:48:27 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:48:27.297494 | orchestrator | 2026-03-17 00:48:27 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:30.397863 | orchestrator | 2026-03-17 00:48:30 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:48:30.397937 | orchestrator | 2026-03-17 00:48:30 | INFO  | Task d2669867-12ee-460e-a174-4ea948540269 is in state STARTED 2026-03-17 00:48:30.397945 | orchestrator | 2026-03-17 00:48:30 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:48:30.397951 | orchestrator | 2026-03-17 00:48:30 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:48:30.397957 | orchestrator | 2026-03-17 00:48:30 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:48:30.397963 | orchestrator | 2026-03-17 00:48:30 | INFO  | Task 
2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:48:30.397968 | orchestrator | 2026-03-17 00:48:30 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:48:30.397974 | orchestrator | 2026-03-17 00:48:30 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:33.649879 | orchestrator | 2026-03-17 00:48:33 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:48:33.649950 | orchestrator | 2026-03-17 00:48:33 | INFO  | Task d2669867-12ee-460e-a174-4ea948540269 is in state STARTED 2026-03-17 00:48:33.649956 | orchestrator | 2026-03-17 00:48:33 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:48:33.649961 | orchestrator | 2026-03-17 00:48:33 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:48:33.649965 | orchestrator | 2026-03-17 00:48:33 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:48:33.649969 | orchestrator | 2026-03-17 00:48:33 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:48:33.649973 | orchestrator | 2026-03-17 00:48:33 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:48:33.649977 | orchestrator | 2026-03-17 00:48:33 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:36.774995 | orchestrator | 2026-03-17 00:48:36 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:48:36.775110 | orchestrator | 2026-03-17 00:48:36 | INFO  | Task d2669867-12ee-460e-a174-4ea948540269 is in state STARTED 2026-03-17 00:48:36.775126 | orchestrator | 2026-03-17 00:48:36 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:48:36.775136 | orchestrator | 2026-03-17 00:48:36 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:48:36.775144 | orchestrator | 2026-03-17 00:48:36 | INFO  | Task 
a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED
2026-03-17 00:48:36.775153 | orchestrator | 2026-03-17 00:48:36 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED
2026-03-17 00:48:36.775162 | orchestrator | 2026-03-17 00:48:36 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED
2026-03-17 00:48:36.775171 | orchestrator | 2026-03-17 00:48:36 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:48:39.771487 | orchestrator |
2026-03-17 00:48:39.771657 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-17 00:48:39.771675 | orchestrator |
2026-03-17 00:48:39.771688 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-03-17 00:48:39.771699 | orchestrator | Tuesday 17 March 2026 00:48:27 +0000 (0:00:00.597) 0:00:00.597 *********
2026-03-17 00:48:39.771732 | orchestrator | changed: [testbed-manager]
2026-03-17 00:48:39.771745 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:48:39.771756 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:48:39.771767 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:48:39.771778 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:48:39.771789 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:48:39.771800 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:48:39.771810 | orchestrator |
2026-03-17 00:48:39.771821 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
********
2026-03-17 00:48:39.771832 | orchestrator | Tuesday 17 March 2026 00:48:31 +0000 (0:00:03.740) 0:00:04.338 *********
2026-03-17 00:48:39.771844 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-17 00:48:39.771856 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-17 00:48:39.771867 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-17 00:48:39.771878 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-17 00:48:39.771889 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-17 00:48:39.771899 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-17 00:48:39.771910 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-17 00:48:39.771921 | orchestrator |
2026-03-17 00:48:39.771932 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-17 00:48:39.771944 | orchestrator | Tuesday 17 March 2026 00:48:32 +0000 (0:00:01.499) 0:00:05.838 *********
2026-03-17 00:48:39.771960 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:31.930956', 'end': '2026-03-17 00:48:31.937728', 'delta': '0:00:00.006772', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-17 00:48:39.771976 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:31.998804', 'end': '2026-03-17 00:48:32.005178', 'delta': '0:00:00.006374', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-17 00:48:39.771999 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:31.847881', 'end': '2026-03-17 00:48:31.852323', 'delta': '0:00:00.004442', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-17 00:48:39.772051 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:32.134945', 'end': '2026-03-17 00:48:32.142113', 'delta': '0:00:00.007168', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-17 00:48:39.772066 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:32.052539', 'end': '2026-03-17 00:48:32.061004', 'delta': '0:00:00.008465', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-17 00:48:39.772079 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:32.235893', 'end': '2026-03-17 00:48:32.243130', 'delta': '0:00:00.007237', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-17 00:48:39.772092 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:48:32.307888', 'end': '2026-03-17 00:48:32.314201', 'delta': '0:00:00.006313', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-17 00:48:39.772105 | orchestrator |
2026-03-17 00:48:39.772117 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.]
****
2026-03-17 00:48:39.772130 | orchestrator | Tuesday 17 March 2026 00:48:34 +0000 (0:00:01.336) 0:00:07.174 *********
2026-03-17 00:48:39.772163 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-17 00:48:39.772176 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-17 00:48:39.772200 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-17 00:48:39.772213 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-17 00:48:39.772228 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-17 00:48:39.772265 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-17 00:48:39.772296 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-17 00:48:39.772316 | orchestrator |
2026-03-17 00:48:39.772335 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-17 00:48:39.772354 | orchestrator | Tuesday 17 March 2026 00:48:36 +0000 (0:00:01.965) 0:00:09.140 *********
2026-03-17 00:48:39.772371 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-17 00:48:39.772391 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-17 00:48:39.772412 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-17 00:48:39.772431 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-17 00:48:39.772451 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-17 00:48:39.772469 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-17 00:48:39.772487 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-17 00:48:39.772506 | orchestrator |
2026-03-17 00:48:39.772527 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:48:39.772906 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:48:39.772927 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:48:39.772939 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:48:39.772950 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:48:39.772961 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:48:39.772972 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:48:39.772983 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:48:39.772994 | orchestrator |
2026-03-17 00:48:39.773005 | orchestrator |
2026-03-17 00:48:39.773016 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:48:39.773027 | orchestrator | Tuesday 17 March 2026 00:48:38 +0000 (0:00:02.677) 0:00:11.818 *********
2026-03-17 00:48:39.773037 | orchestrator | ===============================================================================
2026-03-17 00:48:39.773048 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.74s
2026-03-17 00:48:39.773059 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.68s
2026-03-17 00:48:39.773070 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.97s
2026-03-17 00:48:39.773081 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.50s
2026-03-17 00:48:39.773091 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.
--- 1.34s 2026-03-17 00:48:39.773102 | orchestrator | 2026-03-17 00:48:39 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:48:39.773113 | orchestrator | 2026-03-17 00:48:39 | INFO  | Task d2669867-12ee-460e-a174-4ea948540269 is in state SUCCESS 2026-03-17 00:48:39.773124 | orchestrator | 2026-03-17 00:48:39 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:48:39.773135 | orchestrator | 2026-03-17 00:48:39 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:48:39.773146 | orchestrator | 2026-03-17 00:48:39 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:48:39.773168 | orchestrator | 2026-03-17 00:48:39 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:48:39.773179 | orchestrator | 2026-03-17 00:48:39 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:48:39.773190 | orchestrator | 2026-03-17 00:48:39 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:42.935890 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:48:42.936020 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:48:42.936036 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:48:42.936045 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:48:42.936054 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:48:42.936064 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:48:42.936073 | orchestrator | 2026-03-17 00:48:42 | INFO  | Task 
1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:48:42.936083 | orchestrator | 2026-03-17 00:48:42 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:45.873434 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:48:45.873523 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:48:45.873533 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:48:45.873541 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:48:45.873548 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:48:45.873555 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:48:45.873562 | orchestrator | 2026-03-17 00:48:45 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:48:45.873569 | orchestrator | 2026-03-17 00:48:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:48.909720 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:48:48.910678 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:48:48.912437 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:48:48.913115 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:48:48.913796 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:48:48.914575 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task 
2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:48:48.915252 | orchestrator | 2026-03-17 00:48:48 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:48:48.915298 | orchestrator | 2026-03-17 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:51.948128 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:48:51.952699 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:48:51.953974 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:48:51.956208 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:48:51.957923 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:48:51.960506 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:48:51.962609 | orchestrator | 2026-03-17 00:48:51 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:48:51.962849 | orchestrator | 2026-03-17 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:55.047608 | orchestrator | 2026-03-17 00:48:55 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:48:55.053019 | orchestrator | 2026-03-17 00:48:55 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:48:55.053675 | orchestrator | 2026-03-17 00:48:55 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:48:55.054826 | orchestrator | 2026-03-17 00:48:55 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:48:55.055663 | orchestrator | 2026-03-17 00:48:55 | INFO  | Task 
a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:48:55.056419 | orchestrator | 2026-03-17 00:48:55 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:48:55.057897 | orchestrator | 2026-03-17 00:48:55 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:48:55.057942 | orchestrator | 2026-03-17 00:48:55 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:48:58.111547 | orchestrator | 2026-03-17 00:48:58 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:48:58.114376 | orchestrator | 2026-03-17 00:48:58 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:48:58.115574 | orchestrator | 2026-03-17 00:48:58 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:48:58.116806 | orchestrator | 2026-03-17 00:48:58 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:48:58.117870 | orchestrator | 2026-03-17 00:48:58 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:48:58.119658 | orchestrator | 2026-03-17 00:48:58 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:48:58.120833 | orchestrator | 2026-03-17 00:48:58 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:48:58.121048 | orchestrator | 2026-03-17 00:48:58 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:01.278299 | orchestrator | 2026-03-17 00:49:01 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:49:01.278381 | orchestrator | 2026-03-17 00:49:01 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:49:01.279242 | orchestrator | 2026-03-17 00:49:01 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:49:01.279295 | orchestrator | 2026-03-17 00:49:01 | INFO  | Task 
a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:49:01.279303 | orchestrator | 2026-03-17 00:49:01 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:49:01.279337 | orchestrator | 2026-03-17 00:49:01 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:49:01.279344 | orchestrator | 2026-03-17 00:49:01 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:49:01.279351 | orchestrator | 2026-03-17 00:49:01 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:04.323032 | orchestrator | 2026-03-17 00:49:04 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:49:04.323126 | orchestrator | 2026-03-17 00:49:04 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:49:04.325228 | orchestrator | 2026-03-17 00:49:04 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:49:04.326531 | orchestrator | 2026-03-17 00:49:04 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:49:04.326576 | orchestrator | 2026-03-17 00:49:04 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:49:04.327782 | orchestrator | 2026-03-17 00:49:04 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:49:04.331151 | orchestrator | 2026-03-17 00:49:04 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state STARTED 2026-03-17 00:49:04.331200 | orchestrator | 2026-03-17 00:49:04 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:07.371913 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:49:07.371994 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:49:07.390894 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task 
c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:49:07.390961 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:49:07.390966 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:49:07.390979 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:49:07.390984 | orchestrator | 2026-03-17 00:49:07 | INFO  | Task 1e48b722-57d3-4817-987d-52a2734d7b95 is in state SUCCESS 2026-03-17 00:49:07.391011 | orchestrator | 2026-03-17 00:49:07 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:10.442097 | orchestrator | 2026-03-17 00:49:10 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state STARTED 2026-03-17 00:49:10.442190 | orchestrator | 2026-03-17 00:49:10 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:49:10.442199 | orchestrator | 2026-03-17 00:49:10 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:49:10.442204 | orchestrator | 2026-03-17 00:49:10 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:49:10.442209 | orchestrator | 2026-03-17 00:49:10 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:49:10.442214 | orchestrator | 2026-03-17 00:49:10 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:49:10.442219 | orchestrator | 2026-03-17 00:49:10 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:13.448479 | orchestrator | 2026-03-17 00:49:13 | INFO  | Task dd642824-f8eb-41da-b6b9-b5841a44d679 is in state SUCCESS 2026-03-17 00:49:13.450928 | orchestrator | 2026-03-17 00:49:13 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:49:13.452388 | orchestrator | 2026-03-17 00:49:13 | INFO  | Task 
c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:49:13.453233 | orchestrator | 2026-03-17 00:49:13 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:49:13.455660 | orchestrator | 2026-03-17 00:49:13 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:49:13.456233 | orchestrator | 2026-03-17 00:49:13 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:49:13.456266 | orchestrator | 2026-03-17 00:49:13 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:16.496114 | orchestrator | 2026-03-17 00:49:16 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:49:16.496857 | orchestrator | 2026-03-17 00:49:16 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:49:16.508966 | orchestrator | 2026-03-17 00:49:16 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:49:16.509610 | orchestrator | 2026-03-17 00:49:16 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:49:16.510592 | orchestrator | 2026-03-17 00:49:16 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:49:16.510644 | orchestrator | 2026-03-17 00:49:16 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:19.551421 | orchestrator | 2026-03-17 00:49:19 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:49:19.553979 | orchestrator | 2026-03-17 00:49:19 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:49:19.555790 | orchestrator | 2026-03-17 00:49:19 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:49:19.556991 | orchestrator | 2026-03-17 00:49:19 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:49:19.558449 | orchestrator | 2026-03-17 00:49:19 | INFO  | Task 
2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:49:19.558500 | orchestrator | 2026-03-17 00:49:19 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:22.635294 | orchestrator | 2026-03-17 00:49:22 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:49:22.636592 | orchestrator | 2026-03-17 00:49:22 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:49:22.636629 | orchestrator | 2026-03-17 00:49:22 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:49:22.636968 | orchestrator | 2026-03-17 00:49:22 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:49:22.636981 | orchestrator | 2026-03-17 00:49:22 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:49:22.639213 | orchestrator | 2026-03-17 00:49:22 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:25.676796 | orchestrator | 2026-03-17 00:49:25 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:49:25.680630 | orchestrator | 2026-03-17 00:49:25 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:49:25.680919 | orchestrator | 2026-03-17 00:49:25 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:49:25.682318 | orchestrator | 2026-03-17 00:49:25 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:49:25.683537 | orchestrator | 2026-03-17 00:49:25 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED 2026-03-17 00:49:25.683571 | orchestrator | 2026-03-17 00:49:25 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:49:28.717549 | orchestrator | 2026-03-17 00:49:28 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:49:28.719476 | orchestrator | 2026-03-17 00:49:28 | INFO  | Task 
c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED
2026-03-17 00:49:28.720976 | orchestrator | 2026-03-17 00:49:28 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED
2026-03-17 00:49:28.722197 | orchestrator | 2026-03-17 00:49:28 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED
2026-03-17 00:49:28.722238 | orchestrator | 2026-03-17 00:49:28 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state STARTED
2026-03-17 00:49:28.722319 | orchestrator | 2026-03-17 00:49:28 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles repeat roughly every 3 s from 00:49:31 through 00:49:56; all five tasks remain in state STARTED ...]
2026-03-17 00:49:59.212659 | orchestrator | 2026-03-17 00:49:59 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED
2026-03-17 00:49:59.214901 | orchestrator | 2026-03-17 00:49:59 | INFO  | Task
c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED
2026-03-17 00:49:59.217257 | orchestrator | 2026-03-17 00:49:59 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED
2026-03-17 00:49:59.219100 | orchestrator | 2026-03-17 00:49:59 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED
2026-03-17 00:49:59.222202 | orchestrator |
2026-03-17 00:49:59.222314 | orchestrator |
2026-03-17 00:49:59.222341 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-17 00:49:59.222363 | orchestrator |
2026-03-17 00:49:59.222383 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-17 00:49:59.222402 | orchestrator | Tuesday 17 March 2026 00:48:28 +0000 (0:00:00.962) 0:00:00.962 *********
2026-03-17 00:49:59.222422 | orchestrator | ok: [testbed-manager] => {
2026-03-17 00:49:59.222448 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-17 00:49:59.222468 | orchestrator | }
2026-03-17 00:49:59.222489 | orchestrator |
2026-03-17 00:49:59.222509 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-17 00:49:59.222528 | orchestrator | Tuesday 17 March 2026 00:48:29 +0000 (0:00:00.527) 0:00:01.489 *********
2026-03-17 00:49:59.222547 | orchestrator | ok: [testbed-manager]
2026-03-17 00:49:59.222565 | orchestrator |
2026-03-17 00:49:59.222583 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-17 00:49:59.222601 | orchestrator | Tuesday 17 March 2026 00:48:30 +0000 (0:00:01.594) 0:00:03.084 *********
2026-03-17 00:49:59.222618 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-17 00:49:59.222638 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-17 00:49:59.222657 | orchestrator |
2026-03-17 00:49:59.222677 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-17 00:49:59.222696 | orchestrator | Tuesday 17 March 2026 00:48:31 +0000 (0:00:01.213) 0:00:04.297 *********
2026-03-17 00:49:59.222715 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.222735 | orchestrator |
2026-03-17 00:49:59.222755 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-17 00:49:59.222800 | orchestrator | Tuesday 17 March 2026 00:48:36 +0000 (0:00:04.194) 0:00:08.491 *********
2026-03-17 00:49:59.222819 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.222837 | orchestrator |
2026-03-17 00:49:59.222854 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-17 00:49:59.222872 | orchestrator | Tuesday 17 March 2026 00:48:37 +0000 (0:00:01.659) 0:00:10.151 *********
2026-03-17 00:49:59.222890 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
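[editor's note] The "FAILED - RETRYING … (10 retries left)" lines above are the standard output Ansible prints for a task declared with `until`/`retries`/`delay`. A minimal sketch of that pattern, as a hypothetical task (the module, project path, and delay are assumptions, not the actual osism.services.homer implementation):

```yaml
# Hypothetical illustration of the retry pattern that produces
# "FAILED - RETRYING ... (10 retries left)" in the log. The real
# osism.services.homer task may use a different module and paths.
- name: Manage example service
  community.docker.docker_compose_v2:   # assumed module
    project_src: /opt/example           # assumed path
    state: present
  register: result
  until: result is success              # retry until the module reports success
  retries: 10                           # matches the "10 retries left" countdown
  delay: 5                              # seconds between attempts (assumed)
```

With `retries: 10`, Ansible attempts the task up to 11 times in total, printing one RETRYING line per failed attempt; the single RETRYING line followed by `ok:` in this log means the second attempt succeeded.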
2026-03-17 00:49:59.222908 | orchestrator | ok: [testbed-manager]
2026-03-17 00:49:59.222927 | orchestrator |
2026-03-17 00:49:59.222944 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-17 00:49:59.222963 | orchestrator | Tuesday 17 March 2026 00:49:04 +0000 (0:00:26.629) 0:00:36.781 *********
2026-03-17 00:49:59.222980 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.222999 | orchestrator |
2026-03-17 00:49:59.223017 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:49:59.223064 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:49:59.223086 | orchestrator |
2026-03-17 00:49:59.223106 | orchestrator |
2026-03-17 00:49:59.223125 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:49:59.223145 | orchestrator | Tuesday 17 March 2026 00:49:05 +0000 (0:00:01.671) 0:00:38.452 *********
2026-03-17 00:49:59.223164 | orchestrator | ===============================================================================
2026-03-17 00:49:59.223182 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.63s
2026-03-17 00:49:59.223200 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.19s
2026-03-17 00:49:59.223219 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.67s
2026-03-17 00:49:59.223239 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.66s
2026-03-17 00:49:59.223259 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.59s
2026-03-17 00:49:59.223278 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.21s
2026-03-17 00:49:59.223298 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.53s
2026-03-17 00:49:59.223318 | orchestrator |
2026-03-17 00:49:59.223336 | orchestrator |
2026-03-17 00:49:59.223355 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-17 00:49:59.223374 | orchestrator |
2026-03-17 00:49:59.223393 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-17 00:49:59.223412 | orchestrator | Tuesday 17 March 2026 00:48:27 +0000 (0:00:00.499) 0:00:00.499 *********
2026-03-17 00:49:59.223430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-17 00:49:59.223447 | orchestrator |
2026-03-17 00:49:59.223463 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-17 00:49:59.223479 | orchestrator | Tuesday 17 March 2026 00:48:27 +0000 (0:00:00.436) 0:00:00.935 *********
2026-03-17 00:49:59.223495 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-17 00:49:59.223511 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-17 00:49:59.223528 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-17 00:49:59.223545 | orchestrator |
2026-03-17 00:49:59.223562 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-17 00:49:59.223578 | orchestrator | Tuesday 17 March 2026 00:48:29 +0000 (0:00:02.113) 0:00:03.049 *********
2026-03-17 00:49:59.223592 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.223602 | orchestrator |
2026-03-17 00:49:59.223612 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-17 00:49:59.223621 | orchestrator | Tuesday 17 March 2026 00:48:32 +0000 (0:00:02.517)
0:00:05.566 *********
2026-03-17 00:49:59.223650 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-17 00:49:59.223667 | orchestrator | ok: [testbed-manager]
2026-03-17 00:49:59.223684 | orchestrator |
2026-03-17 00:49:59.223700 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-17 00:49:59.223718 | orchestrator | Tuesday 17 March 2026 00:49:05 +0000 (0:00:33.385) 0:00:38.952 *********
2026-03-17 00:49:59.223735 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.223752 | orchestrator |
2026-03-17 00:49:59.223793 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-17 00:49:59.223814 | orchestrator | Tuesday 17 March 2026 00:49:06 +0000 (0:00:00.948) 0:00:39.900 *********
2026-03-17 00:49:59.223824 | orchestrator | ok: [testbed-manager]
2026-03-17 00:49:59.223833 | orchestrator |
2026-03-17 00:49:59.223843 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-17 00:49:59.223863 | orchestrator | Tuesday 17 March 2026 00:49:07 +0000 (0:00:00.730) 0:00:40.631 *********
2026-03-17 00:49:59.223873 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.223882 | orchestrator |
2026-03-17 00:49:59.223892 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-17 00:49:59.223901 | orchestrator | Tuesday 17 March 2026 00:49:08 +0000 (0:00:01.301) 0:00:41.932 *********
2026-03-17 00:49:59.223911 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.223922 | orchestrator |
2026-03-17 00:49:59.223939 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-17 00:49:59.223955 | orchestrator | Tuesday 17 March 2026 00:49:09 +0000 (0:00:00.640) 0:00:42.573 *********
2026-03-17 00:49:59.223971 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.223988 | orchestrator |
2026-03-17 00:49:59.224005 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-17 00:49:59.224022 | orchestrator | Tuesday 17 March 2026 00:49:09 +0000 (0:00:00.841) 0:00:43.414 *********
2026-03-17 00:49:59.224035 | orchestrator | ok: [testbed-manager]
2026-03-17 00:49:59.224045 | orchestrator |
2026-03-17 00:49:59.224055 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:49:59.224065 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:49:59.224075 | orchestrator |
2026-03-17 00:49:59.224084 | orchestrator |
2026-03-17 00:49:59.224094 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:49:59.224104 | orchestrator | Tuesday 17 March 2026 00:49:10 +0000 (0:00:00.390) 0:00:43.805 *********
2026-03-17 00:49:59.224113 | orchestrator | ===============================================================================
2026-03-17 00:49:59.224123 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.39s
2026-03-17 00:49:59.224133 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.52s
2026-03-17 00:49:59.224143 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.11s
2026-03-17 00:49:59.224152 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.30s
2026-03-17 00:49:59.224162 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.95s
2026-03-17 00:49:59.224171 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.84s
2026-03-17 00:49:59.224181 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.73s
2026-03-17 00:49:59.224190 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.64s
2026-03-17 00:49:59.224200 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.44s
2026-03-17 00:49:59.224209 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.39s
2026-03-17 00:49:59.224219 | orchestrator |
2026-03-17 00:49:59.224229 | orchestrator |
2026-03-17 00:49:59.224238 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 00:49:59.224248 | orchestrator |
2026-03-17 00:49:59.224258 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 00:49:59.224268 | orchestrator | Tuesday 17 March 2026 00:48:27 +0000 (0:00:00.279) 0:00:00.279 *********
2026-03-17 00:49:59.224277 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-17 00:49:59.224287 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-17 00:49:59.224296 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-17 00:49:59.224306 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-17 00:49:59.224315 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-17 00:49:59.224324 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-17 00:49:59.224334 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-17 00:49:59.224344 | orchestrator |
2026-03-17 00:49:59.224360 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-17 00:49:59.224370 | orchestrator |
2026-03-17 00:49:59.224380 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-17 00:49:59.224390 | orchestrator | Tuesday 17 March 2026 00:48:29 +0000 (0:00:01.872) 0:00:02.152 *********
2026-03-17 00:49:59.224412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4
2026-03-17 00:49:59.224424 | orchestrator |
2026-03-17 00:49:59.224434 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-17 00:49:59.224443 | orchestrator | Tuesday 17 March 2026 00:48:30 +0000 (0:00:01.329) 0:00:03.481 *********
2026-03-17 00:49:59.224453 | orchestrator | ok: [testbed-manager]
2026-03-17 00:49:59.224463 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:49:59.224472 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:49:59.224482 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:49:59.224492 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:49:59.224510 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:49:59.224520 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:49:59.224530 | orchestrator |
2026-03-17 00:49:59.224540 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-17 00:49:59.224549 | orchestrator | Tuesday 17 March 2026 00:48:32 +0000 (0:00:01.646) 0:00:05.128 *********
2026-03-17 00:49:59.224559 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:49:59.224569 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:49:59.224578 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:49:59.224588 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:49:59.224598 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:49:59.224611 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:49:59.224621 | orchestrator | ok: [testbed-manager]
2026-03-17 00:49:59.224631 | orchestrator |
2026-03-17 00:49:59.224641 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-17 00:49:59.224651 | orchestrator | Tuesday 17 March 2026 00:48:35 +0000 (0:00:03.304) 0:00:08.432 *********
2026-03-17 00:49:59.224661 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:49:59.224670 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:49:59.224680 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:49:59.224690 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:49:59.224699 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.224709 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:49:59.224719 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:49:59.224728 | orchestrator |
2026-03-17 00:49:59.224738 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-17 00:49:59.224748 | orchestrator | Tuesday 17 March 2026 00:48:37 +0000 (0:00:01.929) 0:00:10.361 *********
2026-03-17 00:49:59.224758 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:49:59.224842 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:49:59.224856 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:49:59.224866 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:49:59.224875 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:49:59.224885 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.224895 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:49:59.224905 | orchestrator |
2026-03-17 00:49:59.224914 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-17 00:49:59.224929 | orchestrator | Tuesday 17 March 2026 00:48:49 +0000 (0:00:11.719) 0:00:22.081 *********
2026-03-17 00:49:59.224946 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.224963 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:49:59.224980 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:49:59.224998 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:49:59.225017 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:49:59.225037 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:49:59.225055 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:49:59.225076 | orchestrator |
2026-03-17 00:49:59.225086 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-17 00:49:59.225095 | orchestrator | Tuesday 17 March 2026 00:49:28 +0000 (0:00:39.622) 0:01:01.703 *********
2026-03-17 00:49:59.225106 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:49:59.225117 | orchestrator |
2026-03-17 00:49:59.225127 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-17 00:49:59.225137 | orchestrator | Tuesday 17 March 2026 00:49:30 +0000 (0:00:01.594) 0:01:03.298 *********
2026-03-17 00:49:59.225146 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-17 00:49:59.225156 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-17 00:49:59.225166 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-17 00:49:59.225176 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-17 00:49:59.225185 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-17 00:49:59.225195 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-17 00:49:59.225204 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-17 00:49:59.225214 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-17 00:49:59.225223 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-17 00:49:59.225233 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-17 00:49:59.225243 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-17 00:49:59.225252 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-17 00:49:59.225262 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-17 00:49:59.225271 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-17 00:49:59.225281 | orchestrator |
2026-03-17 00:49:59.225291 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-17 00:49:59.225301 | orchestrator | Tuesday 17 March 2026 00:49:35 +0000 (0:00:04.941) 0:01:08.240 *********
2026-03-17 00:49:59.225310 | orchestrator | ok: [testbed-manager]
2026-03-17 00:49:59.225320 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:49:59.225330 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:49:59.225339 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:49:59.225349 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:49:59.225359 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:49:59.225368 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:49:59.225377 | orchestrator |
2026-03-17 00:49:59.225387 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-17 00:49:59.225397 | orchestrator | Tuesday 17 March 2026 00:49:36 +0000 (0:00:01.229) 0:01:09.470 *********
2026-03-17 00:49:59.225407 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:49:59.225416 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:49:59.225426 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.225436 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:49:59.225446 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:49:59.225455 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:49:59.225465 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:49:59.225475 | orchestrator |
2026-03-17 00:49:59.225485 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-17 00:49:59.225503 | orchestrator | Tuesday 17 March 2026 00:49:38 +0000 (0:00:01.881) 0:01:11.351 *********
2026-03-17 00:49:59.225512 | orchestrator | ok: [testbed-manager]
2026-03-17 00:49:59.225522 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:49:59.225531 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:49:59.225541 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:49:59.225551 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:49:59.225560 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:49:59.225569 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:49:59.225585 | orchestrator |
2026-03-17 00:49:59.225595 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-17 00:49:59.225615 | orchestrator | Tuesday 17 March 2026 00:49:40 +0000 (0:00:01.674) 0:01:13.026 *********
2026-03-17 00:49:59.225626 | orchestrator | ok: [testbed-manager]
2026-03-17 00:49:59.225635 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:49:59.225645 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:49:59.225655 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:49:59.225665 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:49:59.225674 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:49:59.225684 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:49:59.225693 | orchestrator |
2026-03-17 00:49:59.225703 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-17 00:49:59.225713 | orchestrator | Tuesday 17 March 2026 00:49:42 +0000 (0:00:02.738) 0:01:15.765 *********
2026-03-17 00:49:59.225723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-17 00:49:59.225734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4,
testbed-node-5
2026-03-17 00:49:59.225744 | orchestrator |
2026-03-17 00:49:59.225754 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-17 00:49:59.225764 | orchestrator | Tuesday 17 March 2026 00:49:44 +0000 (0:00:01.351) 0:01:17.116 *********
2026-03-17 00:49:59.225800 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.225810 | orchestrator |
2026-03-17 00:49:59.225819 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-17 00:49:59.225829 | orchestrator | Tuesday 17 March 2026 00:49:45 +0000 (0:00:01.727) 0:01:18.843 *********
2026-03-17 00:49:59.225838 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:49:59.225848 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:49:59.225858 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:49:59.225867 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:49:59.225876 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:49:59.225886 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:49:59.225895 | orchestrator | changed: [testbed-manager]
2026-03-17 00:49:59.225905 | orchestrator |
2026-03-17 00:49:59.225915 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:49:59.225928 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:49:59.225945 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:49:59.225961 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:49:59.225978 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:49:59.225996 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:49:59.226012 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:49:59.226068 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:49:59.226078 | orchestrator |
2026-03-17 00:49:59.226088 | orchestrator |
2026-03-17 00:49:59.226098 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:49:59.226107 | orchestrator | Tuesday 17 March 2026 00:49:57 +0000 (0:00:11.265) 0:01:30.109 *********
2026-03-17 00:49:59.226118 | orchestrator | ===============================================================================
2026-03-17 00:49:59.226135 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.62s
2026-03-17 00:49:59.226146 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.72s
2026-03-17 00:49:59.226155 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.27s
2026-03-17 00:49:59.226165 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.94s
2026-03-17 00:49:59.226175 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.30s
2026-03-17 00:49:59.226184 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.74s
2026-03-17 00:49:59.226194 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.93s
2026-03-17 00:49:59.226204 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.88s
2026-03-17 00:49:59.226213 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.87s
2026-03-17 00:49:59.226223 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.73s
2026-03-17 00:49:59.226233 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.67s
2026-03-17 00:49:59.226251 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.65s
2026-03-17 00:49:59.226261 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.59s
2026-03-17 00:49:59.226270 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.35s
2026-03-17 00:49:59.226281 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.33s
2026-03-17 00:49:59.226291 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.23s
2026-03-17 00:49:59.226305 | orchestrator | 2026-03-17 00:49:59 | INFO  | Task 2846e3e2-4b1e-498e-8a9b-d19693452662 is in state SUCCESS
2026-03-17 00:49:59.226315 | orchestrator | 2026-03-17 00:49:59 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:50:02.265222 | orchestrator | 2026-03-17 00:50:02 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED
2026-03-17 00:50:02.265380 | orchestrator | 2026-03-17 00:50:02 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED
2026-03-17 00:50:02.266402 | orchestrator | 2026-03-17 00:50:02 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED
2026-03-17 00:50:02.267390 | orchestrator | 2026-03-17 00:50:02 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED
2026-03-17 00:50:02.267470 | orchestrator | 2026-03-17 00:50:02 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:50:05.316135 | orchestrator | 2026-03-17 00:50:05 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED
2026-03-17 00:50:05.318609 | orchestrator | 2026-03-17 00:50:05 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED
2026-03-17 00:50:05.321556 | orchestrator | 2026-03-17 00:50:05 | INFO  | Task
a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:50:05.327396 | orchestrator | 2026-03-17 00:50:05 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:50:05.327445 | orchestrator | 2026-03-17 00:50:05 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:08.372009 | orchestrator | 2026-03-17 00:50:08 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:08.372934 | orchestrator | 2026-03-17 00:50:08 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:08.373716 | orchestrator | 2026-03-17 00:50:08 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:50:08.374702 | orchestrator | 2026-03-17 00:50:08 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:50:08.374735 | orchestrator | 2026-03-17 00:50:08 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:11.427046 | orchestrator | 2026-03-17 00:50:11 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:11.428993 | orchestrator | 2026-03-17 00:50:11 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:11.430720 | orchestrator | 2026-03-17 00:50:11 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:50:11.433889 | orchestrator | 2026-03-17 00:50:11 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:50:11.433934 | orchestrator | 2026-03-17 00:50:11 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:14.479928 | orchestrator | 2026-03-17 00:50:14 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:14.480937 | orchestrator | 2026-03-17 00:50:14 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:14.481775 | orchestrator | 2026-03-17 00:50:14 | INFO  | Task 
a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:50:14.483337 | orchestrator | 2026-03-17 00:50:14 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:50:14.483965 | orchestrator | 2026-03-17 00:50:14 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:17.538153 | orchestrator | 2026-03-17 00:50:17 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:17.540979 | orchestrator | 2026-03-17 00:50:17 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:17.543493 | orchestrator | 2026-03-17 00:50:17 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:50:17.545959 | orchestrator | 2026-03-17 00:50:17 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state STARTED 2026-03-17 00:50:17.546060 | orchestrator | 2026-03-17 00:50:17 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:20.597977 | orchestrator | 2026-03-17 00:50:20 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:20.600649 | orchestrator | 2026-03-17 00:50:20 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:20.602842 | orchestrator | 2026-03-17 00:50:20 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:50:20.603386 | orchestrator | 2026-03-17 00:50:20 | INFO  | Task a4aff4c7-20d6-4834-a6f6-5e5571d6d7b0 is in state SUCCESS 2026-03-17 00:50:20.603754 | orchestrator | 2026-03-17 00:50:20 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:23.658569 | orchestrator | 2026-03-17 00:50:23 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:23.663376 | orchestrator | 2026-03-17 00:50:23 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:23.665068 | orchestrator | 2026-03-17 00:50:23 | INFO  | Task 
a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:50:23.665111 | orchestrator | 2026-03-17 00:50:23 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:26.705090 | orchestrator | 2026-03-17 00:50:26 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:26.706305 | orchestrator | 2026-03-17 00:50:26 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:26.707983 | orchestrator | 2026-03-17 00:50:26 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:50:26.708032 | orchestrator | 2026-03-17 00:50:26 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:29.747989 | orchestrator | 2026-03-17 00:50:29 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:29.748298 | orchestrator | 2026-03-17 00:50:29 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:29.749204 | orchestrator | 2026-03-17 00:50:29 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:50:29.749227 | orchestrator | 2026-03-17 00:50:29 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:32.786211 | orchestrator | 2026-03-17 00:50:32 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:32.787421 | orchestrator | 2026-03-17 00:50:32 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:32.790046 | orchestrator | 2026-03-17 00:50:32 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:50:32.790106 | orchestrator | 2026-03-17 00:50:32 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:35.824509 | orchestrator | 2026-03-17 00:50:35 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:35.826129 | orchestrator | 2026-03-17 00:50:35 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state 
STARTED 2026-03-17 00:50:35.828152 | orchestrator | 2026-03-17 00:50:35 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:50:35.828263 | orchestrator | 2026-03-17 00:50:35 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:38.862460 | orchestrator | 2026-03-17 00:50:38 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:38.863241 | orchestrator | 2026-03-17 00:50:38 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:38.864583 | orchestrator | 2026-03-17 00:50:38 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state STARTED 2026-03-17 00:50:38.864610 | orchestrator | 2026-03-17 00:50:38 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:41.902771 | orchestrator | 2026-03-17 00:50:41 | INFO  | Task f02df20b-aa5a-4622-b7bc-3de80d4b5738 is in state STARTED 2026-03-17 00:50:41.902837 | orchestrator | 2026-03-17 00:50:41 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:41.903268 | orchestrator | 2026-03-17 00:50:41 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:41.903935 | orchestrator | 2026-03-17 00:50:41 | INFO  | Task bbb7764b-1490-4fab-b4cd-d111355fd6b4 is in state STARTED 2026-03-17 00:50:41.914178 | orchestrator | 2026-03-17 00:50:41 | INFO  | Task a87b70ec-90a4-4624-86a2-3872d73336f1 is in state SUCCESS 2026-03-17 00:50:41.917011 | orchestrator | 2026-03-17 00:50:41.917074 | orchestrator | 2026-03-17 00:50:41.917087 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-03-17 00:50:41.917098 | orchestrator | 2026-03-17 00:50:41.917109 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-03-17 00:50:41.917119 | orchestrator | Tuesday 17 March 2026 00:48:45 +0000 (0:00:00.188) 0:00:00.188 ********* 2026-03-17 00:50:41.917129 | orchestrator | 
ok: [testbed-manager] 2026-03-17 00:50:41.917139 | orchestrator | 2026-03-17 00:50:41.917150 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-03-17 00:50:41.917160 | orchestrator | Tuesday 17 March 2026 00:48:46 +0000 (0:00:00.788) 0:00:00.977 ********* 2026-03-17 00:50:41.917169 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-03-17 00:50:41.917179 | orchestrator | 2026-03-17 00:50:41.917189 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-03-17 00:50:41.917215 | orchestrator | Tuesday 17 March 2026 00:48:46 +0000 (0:00:00.791) 0:00:01.768 ********* 2026-03-17 00:50:41.917225 | orchestrator | changed: [testbed-manager] 2026-03-17 00:50:41.917235 | orchestrator | 2026-03-17 00:50:41.917245 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-03-17 00:50:41.917254 | orchestrator | Tuesday 17 March 2026 00:48:47 +0000 (0:00:00.881) 0:00:02.649 ********* 2026-03-17 00:50:41.917264 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2026-03-17 00:50:41.917273 | orchestrator | ok: [testbed-manager] 2026-03-17 00:50:41.917283 | orchestrator | 2026-03-17 00:50:41.917293 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-03-17 00:50:41.917302 | orchestrator | Tuesday 17 March 2026 00:50:06 +0000 (0:01:18.920) 0:01:21.570 ********* 2026-03-17 00:50:41.917312 | orchestrator | changed: [testbed-manager] 2026-03-17 00:50:41.917321 | orchestrator | 2026-03-17 00:50:41.917331 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:50:41.917350 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:50:41.917362 | orchestrator | 2026-03-17 00:50:41.917371 | orchestrator | 2026-03-17 00:50:41.917381 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:50:41.917390 | orchestrator | Tuesday 17 March 2026 00:50:18 +0000 (0:00:11.626) 0:01:33.196 ********* 2026-03-17 00:50:41.917400 | orchestrator | =============================================================================== 2026-03-17 00:50:41.917409 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 78.92s 2026-03-17 00:50:41.917419 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 11.63s 2026-03-17 00:50:41.917428 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 0.88s 2026-03-17 00:50:41.917438 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.79s 2026-03-17 00:50:41.917474 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.79s 2026-03-17 00:50:41.917485 | orchestrator | 2026-03-17 00:50:41.917494 | orchestrator | 2026-03-17 00:50:41.917504 | orchestrator | PLAY [Apply role common] 
******************************************************* 2026-03-17 00:50:41.917513 | orchestrator | 2026-03-17 00:50:41.917523 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-17 00:50:41.917532 | orchestrator | Tuesday 17 March 2026 00:48:20 +0000 (0:00:00.208) 0:00:00.208 ********* 2026-03-17 00:50:41.917542 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:50:41.917552 | orchestrator | 2026-03-17 00:50:41.917562 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-17 00:50:41.917572 | orchestrator | Tuesday 17 March 2026 00:48:21 +0000 (0:00:01.184) 0:00:01.392 ********* 2026-03-17 00:50:41.917581 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:50:41.917591 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:50:41.917600 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:50:41.917610 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:50:41.917619 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:50:41.917629 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:50:41.917639 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:50:41.917648 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:50:41.917658 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:50:41.917674 | orchestrator | changed: [testbed-node-4] => 
(item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:50:41.917684 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:50:41.917693 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:50:41.917703 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:50:41.917713 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:50:41.917723 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-17 00:50:41.917732 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:50:41.917781 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:50:41.917794 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:50:41.917804 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-17 00:50:41.917813 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:50:41.917823 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-17 00:50:41.917832 | orchestrator | 2026-03-17 00:50:41.917842 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-17 00:50:41.917906 | orchestrator | Tuesday 17 March 2026 00:48:26 +0000 (0:00:04.143) 0:00:05.536 ********* 2026-03-17 00:50:41.917922 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:50:41.917932 | orchestrator | 2026-03-17 
00:50:41.917942 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-17 00:50:41.917952 | orchestrator | Tuesday 17 March 2026 00:48:27 +0000 (0:00:01.212) 0:00:06.748 ********* 2026-03-17 00:50:41.917965 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.917979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.917990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.918000 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.918069 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.918123 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918156 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.918174 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918188 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.918202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918323 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.918426 | orchestrator | 2026-03-17 00:50:41.918440 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-17 00:50:41.918491 | orchestrator | Tuesday 17 March 2026 00:48:33 +0000 (0:00:06.127) 0:00:12.876 ********* 2026-03-17 00:50:41.918509 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:50:41.918530 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:50:41.918544 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.918559 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:50:41.918574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.918594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.918603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.918611 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:50:41.918619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.918662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.918676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.918684 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:50:41.918692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.918701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.918714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.918722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.918730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.918739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.918750 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:50:41.918758 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:50:41.918767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.918778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.918787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.918803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.918811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.918820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919088 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:41.919100 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:50:41.919107 | orchestrator |
2026-03-17 00:50:41.919115 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-03-17 00:50:41.919122 | orchestrator | Tuesday 17 March 2026 00:48:34 +0000 (0:00:01.127) 0:00:14.003 *********
2026-03-17 00:50:41.919129 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919152 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919163 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919173 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:50:41.919180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919244 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:50:41.919251 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:50:41.919260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919301 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:50:41.919308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919315 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:50:41.919321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919354 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:41.919361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919382 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:50:41.919389 | orchestrator |
2026-03-17 00:50:41.919396 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-03-17 00:50:41.919403 | orchestrator | Tuesday 17 March 2026 00:48:37 +0000 (0:00:02.914) 0:00:16.917 *********
2026-03-17 00:50:41.919409 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:50:41.919416 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:50:41.919422 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:50:41.919429 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:50:41.919436 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:50:41.919443 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:41.919449 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:50:41.919456 | orchestrator |
2026-03-17 00:50:41.919462 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-03-17 00:50:41.919469 | orchestrator | Tuesday 17 March 2026 00:48:39 +0000 (0:00:01.712) 0:00:18.630 *********
2026-03-17 00:50:41.919476 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:50:41.919482 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:50:41.919489 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:50:41.919496 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:50:41.919505 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:50:41.919515 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:41.919522 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:50:41.919529 | orchestrator |
2026-03-17 00:50:41.919535 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-03-17 00:50:41.919542 | orchestrator | Tuesday 17 March 2026 00:48:40 +0000 (0:00:01.595) 0:00:20.226 *********
2026-03-17 00:50:41.919553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919593 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919608 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919627 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:50:41.919665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919675 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919705 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919713 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919754 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919762 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919773 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:50:41.919784 | orchestrator |
2026-03-17 00:50:41.919792 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-17 00:50:41.919800 | orchestrator | Tuesday 17 March 2026 00:48:47 +0000 (0:00:06.231) 0:00:26.457 *********
2026-03-17 00:50:41.919807 | orchestrator | [WARNING]: Skipped
2026-03-17 00:50:41.919815 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-03-17 00:50:41.919823 | orchestrator | to this access issue:
2026-03-17 00:50:41.919830 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-03-17 00:50:41.919837 | orchestrator | directory
2026-03-17 00:50:41.919872 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 00:50:41.919884 | orchestrator |
2026-03-17 00:50:41.919892 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-17 00:50:41.919900 | orchestrator | Tuesday 17 March 2026 00:48:48 +0000 (0:00:01.346) 0:00:27.803 *********
2026-03-17 00:50:41.919907 | orchestrator | [WARNING]: Skipped
2026-03-17 00:50:41.919915 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-03-17 00:50:41.919922 | orchestrator | to this access issue:
2026-03-17 00:50:41.919929 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-03-17 00:50:41.919936 | orchestrator | directory
2026-03-17 00:50:41.919943 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 00:50:41.919949 | orchestrator |
2026-03-17 00:50:41.919956 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-17 00:50:41.919971 | orchestrator | Tuesday 17 March 2026 00:48:49 +0000 (0:00:00.854) 0:00:28.658 *********
2026-03-17 00:50:41.919984 | orchestrator | [WARNING]: Skipped
2026-03-17 00:50:41.919991 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-03-17 00:50:41.919998 | orchestrator | to this access issue:
2026-03-17 00:50:41.920004 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-03-17 00:50:41.920011 | orchestrator | directory
2026-03-17 00:50:41.920018 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 00:50:41.920024 | orchestrator |
2026-03-17 00:50:41.920035 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-17 00:50:41.920042 | orchestrator | Tuesday 17 March 2026 00:48:50 +0000 (0:00:01.144) 0:00:29.802 *********
2026-03-17 00:50:41.920049 | orchestrator | [WARNING]: Skipped
2026-03-17 00:50:41.920055 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-03-17 00:50:41.920062 | orchestrator | to this access issue:
2026-03-17 00:50:41.920070 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-03-17 00:50:41.920081 | orchestrator | directory
2026-03-17 00:50:41.920088 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 00:50:41.920095 | orchestrator |
2026-03-17 00:50:41.920102 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-03-17 00:50:41.920108 | orchestrator | Tuesday 17 March 2026 00:48:51 +0000 (0:00:00.642) 0:00:30.445 *********
2026-03-17 00:50:41.920115 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:50:41.920122 | orchestrator | changed: [testbed-manager]
2026-03-17 00:50:41.920131 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:50:41.920138 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:50:41.920145 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:50:41.920151 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:50:41.920158 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:50:41.920164 | orchestrator |
2026-03-17 00:50:41.920171 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-03-17 00:50:41.920179 | orchestrator | Tuesday 17 March 2026 00:48:54 +0000 (0:00:03.910) 0:00:34.356 *********
2026-03-17 00:50:41.920191 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-17 00:50:41.920201 | orchestrator | changed: [testbed-manager] =>
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:50:41.920210 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:50:41.920220 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:50:41.920230 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:50:41.920241 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:50:41.920251 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:50:41.920274 | orchestrator | 2026-03-17 00:50:41.920286 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-17 00:50:41.920298 | orchestrator | Tuesday 17 March 2026 00:48:58 +0000 (0:00:03.448) 0:00:37.804 ********* 2026-03-17 00:50:41.920309 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:50:41.920317 | orchestrator | changed: [testbed-manager] 2026-03-17 00:50:41.920323 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:50:41.920330 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:50:41.920337 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:50:41.920343 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:50:41.920350 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:50:41.920356 | orchestrator | 2026-03-17 00:50:41.920363 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-17 00:50:41.920369 | orchestrator | Tuesday 17 March 2026 00:49:01 +0000 (0:00:03.345) 0:00:41.150 ********* 2026-03-17 00:50:41.920376 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920384 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:50:41.920391 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:50:41.920414 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:50:41.920433 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-17 00:50:41.920442 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920448 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:50:41.920472 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:50:41.920486 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920497 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920507 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:50:41.920514 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920522 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:50:41.920542 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920551 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920563 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920570 | orchestrator | 2026-03-17 00:50:41.920576 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-17 00:50:41.920583 | orchestrator | Tuesday 17 March 2026 00:49:03 +0000 (0:00:01.780) 0:00:42.931 ********* 2026-03-17 
00:50:41.920590 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:50:41.920597 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:50:41.920603 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:50:41.920610 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:50:41.920616 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:50:41.920623 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:50:41.920630 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:50:41.920636 | orchestrator | 2026-03-17 00:50:41.920643 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-17 00:50:41.920649 | orchestrator | Tuesday 17 March 2026 00:49:05 +0000 (0:00:02.378) 0:00:45.309 ********* 2026-03-17 00:50:41.920656 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:50:41.920663 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:50:41.920669 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:50:41.920676 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:50:41.920682 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:50:41.920689 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:50:41.920695 | 
orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:50:41.920702 | orchestrator | 2026-03-17 00:50:41.920708 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-17 00:50:41.920715 | orchestrator | Tuesday 17 March 2026 00:49:08 +0000 (0:00:02.550) 0:00:47.860 ********* 2026-03-17 00:50:41.920722 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920755 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920776 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:50:41.920815 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920823 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920830 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-17 00:50:41.920887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920911 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920918 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:50:41.920925 | orchestrator | 2026-03-17 00:50:41.920932 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-17 00:50:41.920939 | orchestrator | Tuesday 17 March 2026 00:49:11 +0000 (0:00:03.255) 0:00:51.115 ********* 2026-03-17 00:50:41.920945 | orchestrator | changed: [testbed-manager] 2026-03-17 00:50:41.920952 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:50:41.920959 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:50:41.920965 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:50:41.920972 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:50:41.920979 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:50:41.920985 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:50:41.920992 | orchestrator | 2026-03-17 00:50:41.920999 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-17 00:50:41.921006 | orchestrator | Tuesday 17 March 2026 00:49:13 +0000 (0:00:01.454) 0:00:52.570 ********* 2026-03-17 00:50:41.921013 | orchestrator | changed: [testbed-manager] 2026-03-17 00:50:41.921019 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:50:41.921026 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:50:41.921032 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:50:41.921039 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:50:41.921045 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:50:41.921052 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:50:41.921059 | orchestrator | 2026-03-17 00:50:41.921065 | 
orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:50:41.921076 | orchestrator | Tuesday 17 March 2026 00:49:14 +0000 (0:00:01.064) 0:00:53.635 ********* 2026-03-17 00:50:41.921083 | orchestrator | 2026-03-17 00:50:41.921089 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:50:41.921096 | orchestrator | Tuesday 17 March 2026 00:49:14 +0000 (0:00:00.069) 0:00:53.705 ********* 2026-03-17 00:50:41.921102 | orchestrator | 2026-03-17 00:50:41.921109 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:50:41.921116 | orchestrator | Tuesday 17 March 2026 00:49:14 +0000 (0:00:00.063) 0:00:53.768 ********* 2026-03-17 00:50:41.921122 | orchestrator | 2026-03-17 00:50:41.921130 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:50:41.921136 | orchestrator | Tuesday 17 March 2026 00:49:14 +0000 (0:00:00.161) 0:00:53.930 ********* 2026-03-17 00:50:41.921143 | orchestrator | 2026-03-17 00:50:41.921150 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:50:41.921156 | orchestrator | Tuesday 17 March 2026 00:49:14 +0000 (0:00:00.058) 0:00:53.989 ********* 2026-03-17 00:50:41.921163 | orchestrator | 2026-03-17 00:50:41.921169 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:50:41.921176 | orchestrator | Tuesday 17 March 2026 00:49:14 +0000 (0:00:00.056) 0:00:54.045 ********* 2026-03-17 00:50:41.921182 | orchestrator | 2026-03-17 00:50:41.921189 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:50:41.921196 | orchestrator | Tuesday 17 March 2026 00:49:14 +0000 (0:00:00.059) 0:00:54.104 ********* 2026-03-17 00:50:41.921202 | orchestrator | 2026-03-17 00:50:41.921209 
| orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-17 00:50:41.921216 | orchestrator | Tuesday 17 March 2026 00:49:14 +0000 (0:00:00.081) 0:00:54.186 ********* 2026-03-17 00:50:41.921225 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:50:41.921232 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:50:41.921239 | orchestrator | changed: [testbed-manager] 2026-03-17 00:50:41.921246 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:50:41.921252 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:50:41.921259 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:50:41.921266 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:50:41.921272 | orchestrator | 2026-03-17 00:50:41.921279 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-17 00:50:41.921286 | orchestrator | Tuesday 17 March 2026 00:49:47 +0000 (0:00:33.153) 0:01:27.340 ********* 2026-03-17 00:50:41.921292 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:50:41.921299 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:50:41.921306 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:50:41.921312 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:50:41.921319 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:50:41.921325 | orchestrator | changed: [testbed-manager] 2026-03-17 00:50:41.921335 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:50:41.921342 | orchestrator | 2026-03-17 00:50:41.921349 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-17 00:50:41.921356 | orchestrator | Tuesday 17 March 2026 00:50:27 +0000 (0:00:39.386) 0:02:06.726 ********* 2026-03-17 00:50:41.921362 | orchestrator | ok: [testbed-manager] 2026-03-17 00:50:41.921369 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:50:41.921376 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:50:41.921382 | 
orchestrator | ok: [testbed-node-2] 2026-03-17 00:50:41.921389 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:50:41.921396 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:50:41.921402 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:50:41.921409 | orchestrator | 2026-03-17 00:50:41.921416 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-17 00:50:41.921422 | orchestrator | Tuesday 17 March 2026 00:50:29 +0000 (0:00:01.955) 0:02:08.682 ********* 2026-03-17 00:50:41.921429 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:50:41.921436 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:50:41.921448 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:50:41.921455 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:50:41.921462 | orchestrator | changed: [testbed-manager] 2026-03-17 00:50:41.921469 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:50:41.921475 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:50:41.921482 | orchestrator | 2026-03-17 00:50:41.921489 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:50:41.921495 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-17 00:50:41.921503 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-17 00:50:41.921510 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-17 00:50:41.921517 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-17 00:50:41.921524 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-17 00:50:41.921530 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-17 
00:50:41.921537 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-17 00:50:41.921544 | orchestrator | 2026-03-17 00:50:41.921550 | orchestrator | 2026-03-17 00:50:41.921557 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:50:41.921564 | orchestrator | Tuesday 17 March 2026 00:50:38 +0000 (0:00:09.413) 0:02:18.095 ********* 2026-03-17 00:50:41.921571 | orchestrator | =============================================================================== 2026-03-17 00:50:41.921577 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 39.39s 2026-03-17 00:50:41.921584 | orchestrator | common : Restart fluentd container ------------------------------------- 33.15s 2026-03-17 00:50:41.921590 | orchestrator | common : Restart cron container ----------------------------------------- 9.41s 2026-03-17 00:50:41.921597 | orchestrator | common : Copying over config.json files for services -------------------- 6.23s 2026-03-17 00:50:41.921604 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.13s 2026-03-17 00:50:41.921610 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.14s 2026-03-17 00:50:41.921617 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.91s 2026-03-17 00:50:41.921623 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.45s 2026-03-17 00:50:41.921630 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.35s 2026-03-17 00:50:41.921636 | orchestrator | common : Check common containers ---------------------------------------- 3.26s 2026-03-17 00:50:41.921643 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.91s 2026-03-17 00:50:41.921649 | orchestrator | common : Copy 
rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.55s 2026-03-17 00:50:41.921656 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.38s 2026-03-17 00:50:41.921663 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.96s 2026-03-17 00:50:41.921672 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.78s 2026-03-17 00:50:41.921679 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.71s 2026-03-17 00:50:41.921686 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.60s 2026-03-17 00:50:41.921693 | orchestrator | common : Creating log volume -------------------------------------------- 1.45s 2026-03-17 00:50:41.921703 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.35s 2026-03-17 00:50:41.921710 | orchestrator | common : include_tasks -------------------------------------------------- 1.21s 2026-03-17 00:50:41.921716 | orchestrator | 2026-03-17 00:50:41 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:50:41.921726 | orchestrator | 2026-03-17 00:50:41 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:50:41.921733 | orchestrator | 2026-03-17 00:50:41 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:44.941080 | orchestrator | 2026-03-17 00:50:44 | INFO  | Task f02df20b-aa5a-4622-b7bc-3de80d4b5738 is in state STARTED 2026-03-17 00:50:44.942489 | orchestrator | 2026-03-17 00:50:44 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:44.944593 | orchestrator | 2026-03-17 00:50:44 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:44.945240 | orchestrator | 2026-03-17 00:50:44 | INFO  | Task bbb7764b-1490-4fab-b4cd-d111355fd6b4 is in state STARTED 2026-03-17 
00:50:44.945918 | orchestrator | 2026-03-17 00:50:44 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:50:44.946673 | orchestrator | 2026-03-17 00:50:44 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:50:44.946705 | orchestrator | 2026-03-17 00:50:44 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:47.995454 | orchestrator | 2026-03-17 00:50:47 | INFO  | Task f02df20b-aa5a-4622-b7bc-3de80d4b5738 is in state STARTED 2026-03-17 00:50:47.995659 | orchestrator | 2026-03-17 00:50:47 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:47.996192 | orchestrator | 2026-03-17 00:50:47 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:47.997077 | orchestrator | 2026-03-17 00:50:47 | INFO  | Task bbb7764b-1490-4fab-b4cd-d111355fd6b4 is in state STARTED 2026-03-17 00:50:48.000401 | orchestrator | 2026-03-17 00:50:48 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:50:48.000850 | orchestrator | 2026-03-17 00:50:48 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:50:48.000921 | orchestrator | 2026-03-17 00:50:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:51.044758 | orchestrator | 2026-03-17 00:50:51 | INFO  | Task f02df20b-aa5a-4622-b7bc-3de80d4b5738 is in state STARTED 2026-03-17 00:50:51.044810 | orchestrator | 2026-03-17 00:50:51 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:51.045527 | orchestrator | 2026-03-17 00:50:51 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:51.045819 | orchestrator | 2026-03-17 00:50:51 | INFO  | Task bbb7764b-1490-4fab-b4cd-d111355fd6b4 is in state STARTED 2026-03-17 00:50:51.046553 | orchestrator | 2026-03-17 00:50:51 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 
00:50:51.047297 | orchestrator | 2026-03-17 00:50:51 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:50:51.047328 | orchestrator | 2026-03-17 00:50:51 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:54.080974 | orchestrator | 2026-03-17 00:50:54 | INFO  | Task f02df20b-aa5a-4622-b7bc-3de80d4b5738 is in state STARTED 2026-03-17 00:50:54.081655 | orchestrator | 2026-03-17 00:50:54 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:54.082457 | orchestrator | 2026-03-17 00:50:54 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:54.086331 | orchestrator | 2026-03-17 00:50:54 | INFO  | Task bbb7764b-1490-4fab-b4cd-d111355fd6b4 is in state STARTED 2026-03-17 00:50:54.086827 | orchestrator | 2026-03-17 00:50:54 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:50:54.087908 | orchestrator | 2026-03-17 00:50:54 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:50:54.087936 | orchestrator | 2026-03-17 00:50:54 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:50:57.114271 | orchestrator | 2026-03-17 00:50:57 | INFO  | Task f02df20b-aa5a-4622-b7bc-3de80d4b5738 is in state STARTED 2026-03-17 00:50:57.114353 | orchestrator | 2026-03-17 00:50:57 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:50:57.114942 | orchestrator | 2026-03-17 00:50:57 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:50:57.118245 | orchestrator | 2026-03-17 00:50:57 | INFO  | Task bbb7764b-1490-4fab-b4cd-d111355fd6b4 is in state STARTED 2026-03-17 00:50:57.118363 | orchestrator | 2026-03-17 00:50:57 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:50:57.119188 | orchestrator | 2026-03-17 00:50:57 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 
00:50:57.119234 | orchestrator | 2026-03-17 00:50:57 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:00.150280 | orchestrator | 2026-03-17 00:51:00 | INFO  | Task f02df20b-aa5a-4622-b7bc-3de80d4b5738 is in state STARTED 2026-03-17 00:51:00.150522 | orchestrator | 2026-03-17 00:51:00 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:00.151205 | orchestrator | 2026-03-17 00:51:00 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:00.151755 | orchestrator | 2026-03-17 00:51:00 | INFO  | Task bbb7764b-1490-4fab-b4cd-d111355fd6b4 is in state SUCCESS 2026-03-17 00:51:00.152475 | orchestrator | 2026-03-17 00:51:00 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:00.153230 | orchestrator | 2026-03-17 00:51:00 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:00.153268 | orchestrator | 2026-03-17 00:51:00 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:03.224626 | orchestrator | 2026-03-17 00:51:03 | INFO  | Task f02df20b-aa5a-4622-b7bc-3de80d4b5738 is in state STARTED 2026-03-17 00:51:03.225113 | orchestrator | 2026-03-17 00:51:03 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:03.227215 | orchestrator | 2026-03-17 00:51:03 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:03.228708 | orchestrator | 2026-03-17 00:51:03 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:03.229323 | orchestrator | 2026-03-17 00:51:03 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:03.229755 | orchestrator | 2026-03-17 00:51:03 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:03.229794 | orchestrator | 2026-03-17 00:51:03 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:06.279227 | orchestrator 
| 2026-03-17 00:51:06 | INFO  | Task f02df20b-aa5a-4622-b7bc-3de80d4b5738 is in state STARTED 2026-03-17 00:51:06.282381 | orchestrator | 2026-03-17 00:51:06 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:06.286068 | orchestrator | 2026-03-17 00:51:06 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:06.287665 | orchestrator | 2026-03-17 00:51:06 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:06.289146 | orchestrator | 2026-03-17 00:51:06 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:06.289993 | orchestrator | 2026-03-17 00:51:06 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:06.290110 | orchestrator | 2026-03-17 00:51:06 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:09.339349 | orchestrator | 2026-03-17 00:51:09.339428 | orchestrator | 2026-03-17 00:51:09.339440 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:51:09.339448 | orchestrator | 2026-03-17 00:51:09.339457 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:51:09.339464 | orchestrator | Tuesday 17 March 2026 00:50:43 +0000 (0:00:00.289) 0:00:00.289 ********* 2026-03-17 00:51:09.339472 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:51:09.339480 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:51:09.339487 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:51:09.339494 | orchestrator | 2026-03-17 00:51:09.339502 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:51:09.339509 | orchestrator | Tuesday 17 March 2026 00:50:44 +0000 (0:00:00.409) 0:00:00.698 ********* 2026-03-17 00:51:09.339517 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-17 00:51:09.339525 | 
orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-17 00:51:09.339533 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-17 00:51:09.339540 | orchestrator | 2026-03-17 00:51:09.339548 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-17 00:51:09.339554 | orchestrator | 2026-03-17 00:51:09.339562 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-17 00:51:09.339569 | orchestrator | Tuesday 17 March 2026 00:50:44 +0000 (0:00:00.414) 0:00:01.113 ********* 2026-03-17 00:51:09.339577 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:51:09.339585 | orchestrator | 2026-03-17 00:51:09.339593 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-17 00:51:09.339600 | orchestrator | Tuesday 17 March 2026 00:50:45 +0000 (0:00:00.888) 0:00:02.001 ********* 2026-03-17 00:51:09.339607 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-17 00:51:09.339614 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-17 00:51:09.339621 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-17 00:51:09.339628 | orchestrator | 2026-03-17 00:51:09.339635 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-17 00:51:09.339652 | orchestrator | Tuesday 17 March 2026 00:50:46 +0000 (0:00:01.009) 0:00:03.010 ********* 2026-03-17 00:51:09.339659 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-17 00:51:09.339666 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-17 00:51:09.339673 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-17 00:51:09.339680 | orchestrator | 2026-03-17 00:51:09.339688 | orchestrator | TASK [memcached : Check 
memcached container] *********************************** 2026-03-17 00:51:09.339694 | orchestrator | Tuesday 17 March 2026 00:50:48 +0000 (0:00:02.229) 0:00:05.239 ********* 2026-03-17 00:51:09.339701 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:51:09.339708 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:51:09.339716 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:51:09.339722 | orchestrator | 2026-03-17 00:51:09.339729 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-17 00:51:09.339736 | orchestrator | Tuesday 17 March 2026 00:50:50 +0000 (0:00:02.198) 0:00:07.438 ********* 2026-03-17 00:51:09.339743 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:51:09.339763 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:51:09.339771 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:51:09.339778 | orchestrator | 2026-03-17 00:51:09.339785 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:51:09.339792 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:51:09.339800 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:51:09.339807 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:51:09.339814 | orchestrator | 2026-03-17 00:51:09.339822 | orchestrator | 2026-03-17 00:51:09.339829 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:51:09.339836 | orchestrator | Tuesday 17 March 2026 00:50:58 +0000 (0:00:07.191) 0:00:14.629 ********* 2026-03-17 00:51:09.339843 | orchestrator | =============================================================================== 2026-03-17 00:51:09.339851 | orchestrator | memcached : Restart memcached container 
--------------------------------- 7.19s 2026-03-17 00:51:09.339858 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.23s 2026-03-17 00:51:09.339865 | orchestrator | memcached : Check memcached container ----------------------------------- 2.20s 2026-03-17 00:51:09.339872 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.01s 2026-03-17 00:51:09.339879 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.89s 2026-03-17 00:51:09.339886 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2026-03-17 00:51:09.339937 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s 2026-03-17 00:51:09.339947 | orchestrator | 2026-03-17 00:51:09.339953 | orchestrator | 2026-03-17 00:51:09.339959 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:51:09.339966 | orchestrator | 2026-03-17 00:51:09.339972 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:51:09.339979 | orchestrator | Tuesday 17 March 2026 00:50:43 +0000 (0:00:00.263) 0:00:00.263 ********* 2026-03-17 00:51:09.339985 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:51:09.339991 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:51:09.339997 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:51:09.340003 | orchestrator | 2026-03-17 00:51:09.340010 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:51:09.340027 | orchestrator | Tuesday 17 March 2026 00:50:43 +0000 (0:00:00.338) 0:00:00.601 ********* 2026-03-17 00:51:09.340033 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-17 00:51:09.340039 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-17 00:51:09.340045 | orchestrator | ok: 
[testbed-node-2] => (item=enable_redis_True) 2026-03-17 00:51:09.340052 | orchestrator | 2026-03-17 00:51:09.340058 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-17 00:51:09.340065 | orchestrator | 2026-03-17 00:51:09.340072 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-17 00:51:09.340078 | orchestrator | Tuesday 17 March 2026 00:50:44 +0000 (0:00:00.572) 0:00:01.174 ********* 2026-03-17 00:51:09.340085 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:51:09.340093 | orchestrator | 2026-03-17 00:51:09.340100 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-17 00:51:09.340107 | orchestrator | Tuesday 17 March 2026 00:50:44 +0000 (0:00:00.551) 0:00:01.726 ********* 2026-03-17 00:51:09.340116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340184 | orchestrator | 2026-03-17 00:51:09.340191 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-17 00:51:09.340198 | orchestrator | Tuesday 17 March 2026 00:50:46 +0000 (0:00:01.357) 0:00:03.083 ********* 2026-03-17 00:51:09.340205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340273 | orchestrator | 2026-03-17 00:51:09.340280 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-17 00:51:09.340287 | orchestrator | Tuesday 17 March 2026 00:50:49 +0000 (0:00:03.613) 0:00:06.697 ********* 2026-03-17 00:51:09.340298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340345 | orchestrator | 2026-03-17 00:51:09.340355 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-17 00:51:09.340361 | orchestrator | Tuesday 17 March 2026 00:50:52 +0000 (0:00:02.496) 0:00:09.194 ********* 2026-03-17 00:51:09.340368 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 00:51:09.340407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-17 
00:51:09.340417 | orchestrator | 2026-03-17 00:51:09.340424 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-17 00:51:09.340430 | orchestrator | Tuesday 17 March 2026 00:50:53 +0000 (0:00:01.651) 0:00:10.845 ********* 2026-03-17 00:51:09.340436 | orchestrator | 2026-03-17 00:51:09.340443 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-17 00:51:09.340449 | orchestrator | Tuesday 17 March 2026 00:50:53 +0000 (0:00:00.077) 0:00:10.923 ********* 2026-03-17 00:51:09.340455 | orchestrator | 2026-03-17 00:51:09.340461 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-17 00:51:09.340467 | orchestrator | Tuesday 17 March 2026 00:50:53 +0000 (0:00:00.056) 0:00:10.979 ********* 2026-03-17 00:51:09.340473 | orchestrator | 2026-03-17 00:51:09.340480 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-17 00:51:09.340486 | orchestrator | Tuesday 17 March 2026 00:50:54 +0000 (0:00:00.103) 0:00:11.082 ********* 2026-03-17 00:51:09.340492 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:51:09.340499 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:51:09.340505 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:51:09.340512 | orchestrator | 2026-03-17 00:51:09.340519 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-17 00:51:09.340526 | orchestrator | Tuesday 17 March 2026 00:50:57 +0000 (0:00:03.489) 0:00:14.572 ********* 2026-03-17 00:51:09.340532 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:51:09.340539 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:51:09.340546 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:51:09.340553 | orchestrator | 2026-03-17 00:51:09.340560 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-17 00:51:09.340567 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:51:09.340577 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:51:09.340584 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:51:09.340591 | orchestrator | 2026-03-17 00:51:09.340598 | orchestrator | 2026-03-17 00:51:09.340605 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:51:09.340612 | orchestrator | Tuesday 17 March 2026 00:51:06 +0000 (0:00:08.755) 0:00:23.327 ********* 2026-03-17 00:51:09.340619 | orchestrator | =============================================================================== 2026-03-17 00:51:09.340625 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.76s 2026-03-17 00:51:09.340632 | orchestrator | redis : Copying over default config.json files -------------------------- 3.61s 2026-03-17 00:51:09.340639 | orchestrator | redis : Restart redis container ----------------------------------------- 3.49s 2026-03-17 00:51:09.340646 | orchestrator | redis : Copying over redis config files --------------------------------- 2.50s 2026-03-17 00:51:09.340653 | orchestrator | redis : Check redis containers ------------------------------------------ 1.65s 2026-03-17 00:51:09.340660 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.36s 2026-03-17 00:51:09.340667 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2026-03-17 00:51:09.340674 | orchestrator | redis : include_tasks --------------------------------------------------- 0.55s 2026-03-17 00:51:09.340681 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 0.34s 2026-03-17 00:51:09.340688 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s 2026-03-17 00:51:09.340695 | orchestrator | 2026-03-17 00:51:09 | INFO  | Task f02df20b-aa5a-4622-b7bc-3de80d4b5738 is in state SUCCESS 2026-03-17 00:51:09.347036 | orchestrator | 2026-03-17 00:51:09 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:09.347088 | orchestrator | 2026-03-17 00:51:09 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:09.347096 | orchestrator | 2026-03-17 00:51:09 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:09.347103 | orchestrator | 2026-03-17 00:51:09 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:09.347110 | orchestrator | 2026-03-17 00:51:09 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:09.347118 | orchestrator | 2026-03-17 00:51:09 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:12.368989 | orchestrator | 2026-03-17 00:51:12 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:12.369249 | orchestrator | 2026-03-17 00:51:12 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:12.369966 | orchestrator | 2026-03-17 00:51:12 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:12.370805 | orchestrator | 2026-03-17 00:51:12 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:12.371527 | orchestrator | 2026-03-17 00:51:12 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:12.371581 | orchestrator | 2026-03-17 00:51:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:15.427196 | orchestrator | 2026-03-17 00:51:15 | INFO  | Task 
efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:15.427596 | orchestrator | 2026-03-17 00:51:15 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:15.428585 | orchestrator | 2026-03-17 00:51:15 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:15.429194 | orchestrator | 2026-03-17 00:51:15 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:15.430213 | orchestrator | 2026-03-17 00:51:15 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:15.430267 | orchestrator | 2026-03-17 00:51:15 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:18.466261 | orchestrator | 2026-03-17 00:51:18 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:18.467610 | orchestrator | 2026-03-17 00:51:18 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:18.469574 | orchestrator | 2026-03-17 00:51:18 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:18.469632 | orchestrator | 2026-03-17 00:51:18 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:18.471938 | orchestrator | 2026-03-17 00:51:18 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:18.472002 | orchestrator | 2026-03-17 00:51:18 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:21.496218 | orchestrator | 2026-03-17 00:51:21 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:21.496622 | orchestrator | 2026-03-17 00:51:21 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:21.497383 | orchestrator | 2026-03-17 00:51:21 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:21.497968 | orchestrator | 2026-03-17 00:51:21 | INFO  | Task 
8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:21.498744 | orchestrator | 2026-03-17 00:51:21 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:21.498793 | orchestrator | 2026-03-17 00:51:21 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:24.529430 | orchestrator | 2026-03-17 00:51:24 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:24.530847 | orchestrator | 2026-03-17 00:51:24 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:24.532912 | orchestrator | 2026-03-17 00:51:24 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:24.533885 | orchestrator | 2026-03-17 00:51:24 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:24.534553 | orchestrator | 2026-03-17 00:51:24 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:24.534867 | orchestrator | 2026-03-17 00:51:24 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:27.586827 | orchestrator | 2026-03-17 00:51:27 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:27.586972 | orchestrator | 2026-03-17 00:51:27 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:27.587591 | orchestrator | 2026-03-17 00:51:27 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:27.587939 | orchestrator | 2026-03-17 00:51:27 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:27.591203 | orchestrator | 2026-03-17 00:51:27 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:27.591274 | orchestrator | 2026-03-17 00:51:27 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:30.652219 | orchestrator | 2026-03-17 00:51:30 | INFO  | Task 
efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:30.652270 | orchestrator | 2026-03-17 00:51:30 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:30.652275 | orchestrator | 2026-03-17 00:51:30 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:30.652279 | orchestrator | 2026-03-17 00:51:30 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:30.652283 | orchestrator | 2026-03-17 00:51:30 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:30.652287 | orchestrator | 2026-03-17 00:51:30 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:33.678987 | orchestrator | 2026-03-17 00:51:33 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:33.680986 | orchestrator | 2026-03-17 00:51:33 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:33.681642 | orchestrator | 2026-03-17 00:51:33 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:33.682733 | orchestrator | 2026-03-17 00:51:33 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:33.683761 | orchestrator | 2026-03-17 00:51:33 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:33.683789 | orchestrator | 2026-03-17 00:51:33 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:36.734452 | orchestrator | 2026-03-17 00:51:36 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:36.735195 | orchestrator | 2026-03-17 00:51:36 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:36.735435 | orchestrator | 2026-03-17 00:51:36 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:36.737079 | orchestrator | 2026-03-17 00:51:36 | INFO  | Task 
8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:36.738319 | orchestrator | 2026-03-17 00:51:36 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:36.738380 | orchestrator | 2026-03-17 00:51:36 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:39.785060 | orchestrator | 2026-03-17 00:51:39 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:39.785172 | orchestrator | 2026-03-17 00:51:39 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:39.785197 | orchestrator | 2026-03-17 00:51:39 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:39.786562 | orchestrator | 2026-03-17 00:51:39 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:39.786864 | orchestrator | 2026-03-17 00:51:39 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:39.786889 | orchestrator | 2026-03-17 00:51:39 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:42.814626 | orchestrator | 2026-03-17 00:51:42 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:42.816121 | orchestrator | 2026-03-17 00:51:42 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:42.816173 | orchestrator | 2026-03-17 00:51:42 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:42.816180 | orchestrator | 2026-03-17 00:51:42 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:42.816651 | orchestrator | 2026-03-17 00:51:42 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:42.816676 | orchestrator | 2026-03-17 00:51:42 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:45.849830 | orchestrator | 2026-03-17 00:51:45 | INFO  | Task 
efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:45.850500 | orchestrator | 2026-03-17 00:51:45 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:45.850797 | orchestrator | 2026-03-17 00:51:45 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:45.851566 | orchestrator | 2026-03-17 00:51:45 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:45.852231 | orchestrator | 2026-03-17 00:51:45 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state STARTED 2026-03-17 00:51:45.852266 | orchestrator | 2026-03-17 00:51:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:48.916452 | orchestrator | 2026-03-17 00:51:48 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:48.918432 | orchestrator | 2026-03-17 00:51:48 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:48.918885 | orchestrator | 2026-03-17 00:51:48 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:48.921056 | orchestrator | 2026-03-17 00:51:48 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:48.923363 | orchestrator | 2026-03-17 00:51:48 | INFO  | Task 3a823b4f-b927-4599-aa84-f564a7ecc93f is in state SUCCESS 2026-03-17 00:51:48.924453 | orchestrator | 2026-03-17 00:51:48.924483 | orchestrator | 2026-03-17 00:51:48.924488 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:51:48.924493 | orchestrator | 2026-03-17 00:51:48.924497 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:51:48.924509 | orchestrator | Tuesday 17 March 2026 00:50:43 +0000 (0:00:00.277) 0:00:00.277 ********* 2026-03-17 00:51:48.924513 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:51:48.924518 | orchestrator 
| ok: [testbed-node-1] 2026-03-17 00:51:48.924522 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:51:48.924526 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:51:48.924530 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:51:48.924534 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:51:48.924537 | orchestrator | 2026-03-17 00:51:48.924541 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:51:48.924545 | orchestrator | Tuesday 17 March 2026 00:50:44 +0000 (0:00:00.756) 0:00:01.033 ********* 2026-03-17 00:51:48.924549 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:51:48.924553 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:51:48.924557 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:51:48.924560 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:51:48.924564 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:51:48.924568 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:51:48.924572 | orchestrator | 2026-03-17 00:51:48.924575 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-17 00:51:48.924579 | orchestrator | 2026-03-17 00:51:48.924583 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-17 00:51:48.924591 | orchestrator | Tuesday 17 March 2026 00:50:45 +0000 (0:00:00.713) 0:00:01.747 ********* 2026-03-17 00:51:48.924595 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:51:48.924600 | orchestrator | 
2026-03-17 00:51:48.924603 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-17 00:51:48.924607 | orchestrator | Tuesday 17 March 2026 00:50:46 +0000 (0:00:01.532) 0:00:03.279 ********* 2026-03-17 00:51:48.924611 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-17 00:51:48.924615 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-17 00:51:48.924619 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-17 00:51:48.924623 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-17 00:51:48.924631 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-17 00:51:48.924635 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-17 00:51:48.924639 | orchestrator | 2026-03-17 00:51:48.924643 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-17 00:51:48.924647 | orchestrator | Tuesday 17 March 2026 00:50:48 +0000 (0:00:01.721) 0:00:05.001 ********* 2026-03-17 00:51:48.924650 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-17 00:51:48.924654 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-17 00:51:48.924658 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-17 00:51:48.924662 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-17 00:51:48.924665 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-17 00:51:48.924669 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-17 00:51:48.924673 | orchestrator | 2026-03-17 00:51:48.924677 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-17 00:51:48.924681 | orchestrator | Tuesday 17 March 2026 00:50:50 +0000 (0:00:01.871) 0:00:06.872 ********* 2026-03-17 00:51:48.924685 | orchestrator | skipping: [testbed-node-0] => 
(item=openvswitch)  2026-03-17 00:51:48.924689 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:51:48.924693 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-17 00:51:48.924699 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:51:48.924703 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-17 00:51:48.924706 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:51:48.924710 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-17 00:51:48.924714 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:51:48.924718 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-17 00:51:48.924722 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:51:48.924727 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-17 00:51:48.924733 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:51:48.924740 | orchestrator | 2026-03-17 00:51:48.924746 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-17 00:51:48.924753 | orchestrator | Tuesday 17 March 2026 00:50:51 +0000 (0:00:01.085) 0:00:07.958 ********* 2026-03-17 00:51:48.924760 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:51:48.924766 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:51:48.924773 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:51:48.924780 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:51:48.924787 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:51:48.924794 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:51:48.924798 | orchestrator | 2026-03-17 00:51:48.924801 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-17 00:51:48.924805 | orchestrator | Tuesday 17 March 2026 00:50:52 +0000 (0:00:00.632) 0:00:08.591 ********* 2026-03-17 00:51:48.924818 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924898 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-03-17 00:51:48.924912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924918 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924922 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:51:48.924928 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:51:48.924932 | orchestrator |
2026-03-17 00:51:48.924936 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-17 00:51:48.924940 | orchestrator | Tuesday 17 March 2026 00:50:53 +0000 (0:00:01.629) 0:00:10.220 *********
2026-03-17 00:51:48.924944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924975 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:51:48.924996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925007 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925013 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925017 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:51:48.925021 | orchestrator |
2026-03-17 00:51:48.925025 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-17 00:51:48.925029 | orchestrator | Tuesday 17 March 2026 00:50:56 +0000 (0:00:02.987) 0:00:13.208 *********
2026-03-17 00:51:48.925032 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:51:48.925036 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:51:48.925040 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:51:48.925044 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:51:48.925047 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:51:48.925051 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:51:48.925057 | orchestrator |
2026-03-17 00:51:48.925061 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-03-17 00:51:48.925067 | orchestrator | Tuesday 17 March 2026 00:50:57 +0000 (0:00:01.177) 0:00:14.386 *********
2026-03-17 00:51:48.925071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/',
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925090 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2026-03-17 00:51:48.925104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925108 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925120 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:51:48.925176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': 
True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:51:48.925180 | orchestrator |
2026-03-17 00:51:48.925184 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:51:48.925188 | orchestrator | Tuesday 17 March 2026 00:51:01 +0000 (0:00:03.355) 0:00:17.741 *********
2026-03-17 00:51:48.925192 | orchestrator |
2026-03-17 00:51:48.925195 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:51:48.925199 | orchestrator | Tuesday 17 March 2026 00:51:01 +0000 (0:00:00.262) 0:00:18.003 *********
2026-03-17 00:51:48.925203 | orchestrator |
2026-03-17 00:51:48.925206 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:51:48.925210 | orchestrator | Tuesday 17 March 2026 00:51:01 +0000 (0:00:00.102) 0:00:18.106 *********
2026-03-17 00:51:48.925214 | orchestrator |
2026-03-17 00:51:48.925218 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:51:48.925221 | orchestrator | Tuesday 17 March 2026 00:51:01 +0000 (0:00:00.117) 0:00:18.224 *********
2026-03-17 00:51:48.925225 | orchestrator |
2026-03-17 00:51:48.925228 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:51:48.925232 | orchestrator | Tuesday 17 March 2026 00:51:01 +0000 (0:00:00.099) 0:00:18.323 *********
2026-03-17 00:51:48.925236 | orchestrator |
2026-03-17 00:51:48.925240 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:51:48.925243 | orchestrator | Tuesday 17 March 2026 00:51:01 +0000 (0:00:00.097) 0:00:18.421 *********
2026-03-17 00:51:48.925247 | orchestrator |
2026-03-17 00:51:48.925251 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-17 00:51:48.925254 | orchestrator | Tuesday 17 March 2026 00:51:02 +0000 (0:00:00.098) 0:00:18.519 *********
2026-03-17 00:51:48.925258 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:51:48.925262 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:51:48.925266 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:51:48.925269 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:51:48.925273 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:51:48.925277 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:51:48.925280 | orchestrator |
2026-03-17 00:51:48.925284 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-17 00:51:48.925288 | orchestrator | Tuesday 17 March 2026 00:51:11 +0000 (0:00:09.516) 0:00:28.036 *********
2026-03-17 00:51:48.925292 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:51:48.925295 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:51:48.925299 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:51:48.925303 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:51:48.925306 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:51:48.925310 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:51:48.925314 | orchestrator |
2026-03-17 00:51:48.925317 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-17 00:51:48.925321 | orchestrator | Tuesday 17 March 2026 00:51:13 +0000 (0:00:01.623) 0:00:29.660 *********
2026-03-17 00:51:48.925325 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:51:48.925333 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:51:48.925336 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:51:48.925340 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:51:48.925344 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:51:48.925347 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:51:48.925351 | orchestrator |
2026-03-17 00:51:48.925355 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-17 00:51:48.925359 | orchestrator | Tuesday 17 March 2026 00:51:23 +0000 (0:00:10.646) 0:00:40.306 *********
2026-03-17 00:51:48.925366 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-17 00:51:48.925369 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-17 00:51:48.925373 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-17 00:51:48.925377 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-17 00:51:48.925381 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-17 00:51:48.925384 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-17 00:51:48.925388 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-17 00:51:48.925392 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-17 00:51:48.925396 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-17 00:51:48.925399 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-17 00:51:48.925403 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-17 00:51:48.925409 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-17 00:51:48.925413 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-17 00:51:48.925416 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-17 00:51:48.925420 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-17 00:51:48.925424 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-17 00:51:48.925427 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-17 00:51:48.925431 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-17 00:51:48.925435 | orchestrator |
2026-03-17 00:51:48.925439 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-17 00:51:48.925442 | orchestrator | Tuesday 17 March 2026 00:51:31 +0000 (0:00:07.917) 0:00:48.224 *********
2026-03-17 00:51:48.925446 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-17 00:51:48.925450 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:51:48.925453 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-17 00:51:48.925457 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:51:48.925461 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-17 00:51:48.925465 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:51:48.925468 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-17 00:51:48.925472 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-17 00:51:48.925479 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-17 00:51:48.925483 | orchestrator |
2026-03-17 00:51:48.925487 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-17 00:51:48.925490 | orchestrator | Tuesday 17 March 2026 00:51:34 +0000 (0:00:02.538) 0:00:50.763 *********
2026-03-17 00:51:48.925494 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-17 00:51:48.925498 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:51:48.925502 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-17 00:51:48.925505 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:51:48.925509 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-17 00:51:48.925513 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:51:48.925517 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-17 00:51:48.925520 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-17 00:51:48.925524 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-17 00:51:48.925528 | orchestrator |
2026-03-17 00:51:48.925531 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-17 00:51:48.925535 | orchestrator | Tuesday 17 March 2026 00:51:37 +0000 (0:00:03.165) 0:00:53.929 *********
2026-03-17 00:51:48.925539 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:51:48.925543 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:51:48.925546 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:51:48.925550 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:51:48.925554 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:51:48.925557 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:51:48.925561 | orchestrator |
2026-03-17 00:51:48.925565 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:51:48.925569 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-17 00:51:48.925575 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-17 00:51:48.925579 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-17 00:51:48.925583 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-17 00:51:48.925587 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-17 00:51:48.925591 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-17 00:51:48.925594 | orchestrator |
2026-03-17 00:51:48.925598 | orchestrator |
2026-03-17 00:51:48.925602 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:51:48.925606 | orchestrator | Tuesday 17 March 2026 00:51:45 +0000 (0:00:08.086) 0:01:02.015 *********
2026-03-17 00:51:48.925609 | orchestrator | ===============================================================================
2026-03-17 00:51:48.925613 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.73s
2026-03-17 00:51:48.925617 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.52s
2026-03-17 00:51:48.925621 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.92s
2026-03-17 00:51:48.925624 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.36s
2026-03-17 00:51:48.925630 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.17s
2026-03-17 00:51:48.925634 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.99s
2026-03-17 00:51:48.925640 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.54s
2026-03-17 00:51:48.925644 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.87s
2026-03-17 00:51:48.925647 | orchestrator | module-load : Load modules ---------------------------------------------- 1.72s
2026-03-17 00:51:48.925651 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.63s
2026-03-17 00:51:48.925655 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.62s
2026-03-17 00:51:48.925658 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.53s
2026-03-17 00:51:48.925662 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.18s
2026-03-17 00:51:48.925666 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.09s
2026-03-17 00:51:48.925670 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.78s
2026-03-17 00:51:48.925673 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.76s
2026-03-17 00:51:48.925677 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s
2026-03-17 00:51:48.925681 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.63s
2026-03-17 00:51:48.925685 | orchestrator |
2026-03-17 00:51:48 | INFO  | Task 2a1bb685-06ba-40fd-bc2c-7c4b9e796252 is in state STARTED 2026-03-17
00:51:48.925688 | orchestrator | 2026-03-17 00:51:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:51:51.965091 | orchestrator | 2026-03-17 00:51:51 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:51:51.965487 | orchestrator | 2026-03-17 00:51:51 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state STARTED 2026-03-17 00:51:51.966233 | orchestrator | 2026-03-17 00:51:51 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:51:51.966905 | orchestrator | 2026-03-17 00:51:51 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:51:51.968753 | orchestrator | 2026-03-17 00:51:51 | INFO  | Task 2a1bb685-06ba-40fd-bc2c-7c4b9e796252 is in state STARTED 2026-03-17 00:51:51.968777 | orchestrator | 2026-03-17 00:51:51 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:59.484666 | orchestrator | 2026-03-17 00:52:59 | INFO  | Task f3730e91-e299-4b96-a106-1565d63dce2f is in state STARTED 2026-03-17 00:52:59.484978 | orchestrator | 2026-03-17 00:52:59 | INFO  | Task f18b9757-28be-4e39-ba6e-369d24fe653b is in state STARTED 2026-03-17 00:52:59.488175 | orchestrator | 2026-03-17 00:52:59 | INFO  | Task
efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:52:59.489196 | orchestrator | 2026-03-17 00:52:59.489227 | orchestrator | 2026-03-17 00:52:59 | INFO  | Task cd7055ac-48a2-40a8-a46a-ba4a384263e9 is in state SUCCESS 2026-03-17 00:52:59.490604 | orchestrator | 2026-03-17 00:52:59.490628 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-17 00:52:59.490633 | orchestrator | 2026-03-17 00:52:59.490637 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-17 00:52:59.490641 | orchestrator | Tuesday 17 March 2026 00:48:21 +0000 (0:00:00.171) 0:00:00.171 ********* 2026-03-17 00:52:59.490646 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:52:59.490653 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:52:59.490661 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:52:59.490670 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:52:59.490675 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:52:59.490680 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:52:59.490686 | orchestrator | 2026-03-17 00:52:59.490692 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-17 00:52:59.490697 | orchestrator | Tuesday 17 March 2026 00:48:21 +0000 (0:00:00.653) 0:00:00.824 ********* 2026-03-17 00:52:59.490702 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:52:59.490708 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:52:59.490714 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:52:59.490719 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.490725 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:52:59.490730 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:52:59.490736 | orchestrator | 2026-03-17 00:52:59.490742 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-17 00:52:59.490748 | 
orchestrator | Tuesday 17 March 2026 00:48:22 +0000 (0:00:00.620) 0:00:01.444 ********* 2026-03-17 00:52:59.490754 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:52:59.490760 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:52:59.490766 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:52:59.490772 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.490778 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:52:59.490784 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:52:59.490790 | orchestrator | 2026-03-17 00:52:59.490794 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-17 00:52:59.490798 | orchestrator | Tuesday 17 March 2026 00:48:22 +0000 (0:00:00.641) 0:00:02.085 ********* 2026-03-17 00:52:59.490802 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:52:59.490809 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:52:59.490817 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:52:59.490825 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:52:59.490831 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:52:59.490837 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:52:59.490843 | orchestrator | 2026-03-17 00:52:59.490849 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-17 00:52:59.490855 | orchestrator | Tuesday 17 March 2026 00:48:24 +0000 (0:00:01.755) 0:00:03.841 ********* 2026-03-17 00:52:59.490862 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:52:59.490867 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:52:59.490873 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:52:59.490878 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:52:59.490895 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:52:59.490902 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:52:59.490908 | orchestrator | 2026-03-17 00:52:59.490913 | 
orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-17 00:52:59.490920 | orchestrator | Tuesday 17 March 2026 00:48:25 +0000 (0:00:01.274) 0:00:05.116 ********* 2026-03-17 00:52:59.490926 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:52:59.490932 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:52:59.490937 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:52:59.490943 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:52:59.490949 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:52:59.490955 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:52:59.490960 | orchestrator | 2026-03-17 00:52:59.490966 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-17 00:52:59.490972 | orchestrator | Tuesday 17 March 2026 00:48:27 +0000 (0:00:01.697) 0:00:06.814 ********* 2026-03-17 00:52:59.490979 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:52:59.490985 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:52:59.490992 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:52:59.490998 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.491004 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:52:59.491010 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:52:59.491016 | orchestrator | 2026-03-17 00:52:59.491022 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-17 00:52:59.491029 | orchestrator | Tuesday 17 March 2026 00:48:28 +0000 (0:00:00.681) 0:00:07.495 ********* 2026-03-17 00:52:59.491036 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:52:59.491057 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:52:59.491064 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:52:59.491068 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.491072 | orchestrator | skipping: [testbed-node-1] 2026-03-17 
00:52:59.491076 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:52:59.491080 | orchestrator | 2026-03-17 00:52:59.491084 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-17 00:52:59.491088 | orchestrator | Tuesday 17 March 2026 00:48:29 +0000 (0:00:00.730) 0:00:08.226 ********* 2026-03-17 00:52:59.491092 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-17 00:52:59.491096 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-17 00:52:59.491102 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:52:59.491111 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-17 00:52:59.491120 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-17 00:52:59.491126 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-17 00:52:59.491132 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-17 00:52:59.491138 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:52:59.491144 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-17 00:52:59.491149 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-17 00:52:59.491165 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.491171 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:52:59.491178 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-17 00:52:59.491185 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-17 00:52:59.491191 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-17 00:52:59.491198 | orchestrator | skipping: [testbed-node-2] => 
(item=net.bridge.bridge-nf-call-ip6tables)
2026-03-17 00:52:59.491201 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:59.491205 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.491215 | orchestrator |
2026-03-17 00:52:59.491220 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-17 00:52:59.491225 | orchestrator | Tuesday 17 March 2026 00:48:29 +0000 (0:00:00.654) 0:00:08.881 *********
2026-03-17 00:52:59.491229 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:52:59.491233 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:59.491237 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:52:59.491242 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:59.491246 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.491250 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.491254 | orchestrator |
2026-03-17 00:52:59.491259 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-17 00:52:59.491263 | orchestrator | Tuesday 17 March 2026 00:48:31 +0000 (0:00:02.175) 0:00:11.057 *********
2026-03-17 00:52:59.491268 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:52:59.491272 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:52:59.491277 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.491281 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:52:59.491285 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.491289 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.491293 | orchestrator |
2026-03-17 00:52:59.491297 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-17 00:52:59.491301 | orchestrator | Tuesday 17 March 2026 00:48:32 +0000 (0:00:00.675) 0:00:11.733 *********
2026-03-17 00:52:59.491306 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.491310 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:52:59.491314 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:52:59.491318 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:59.491322 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:59.491326 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:52:59.491331 | orchestrator |
2026-03-17 00:52:59.491335 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-17 00:52:59.491339 | orchestrator | Tuesday 17 March 2026 00:48:38 +0000 (0:00:05.546) 0:00:17.279 *********
2026-03-17 00:52:59.491343 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:52:59.491348 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:59.491352 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:52:59.491356 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:59.491360 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.491364 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.491369 | orchestrator |
2026-03-17 00:52:59.491373 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-17 00:52:59.491379 | orchestrator | Tuesday 17 March 2026 00:48:39 +0000 (0:00:01.241) 0:00:18.521 *********
2026-03-17 00:52:59.491387 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:52:59.491396 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:59.491402 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:52:59.491408 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:59.491414 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.491421 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.491428 | orchestrator |
2026-03-17 00:52:59.491435 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-17 00:52:59.491735 | orchestrator | Tuesday 17 March 2026 00:48:42 +0000 (0:00:02.638) 0:00:21.159 *********
2026-03-17 00:52:59.491753 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:52:59.491759 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:59.491768 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:52:59.491775 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:59.491782 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.491788 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.491794 | orchestrator |
2026-03-17 00:52:59.491800 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-17 00:52:59.491813 | orchestrator | Tuesday 17 March 2026 00:48:42 +0000 (0:00:00.928) 0:00:22.087 *********
2026-03-17 00:52:59.491820 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-17 00:52:59.491826 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-17 00:52:59.491833 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-17 00:52:59.491839 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-17 00:52:59.491845 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-17 00:52:59.491849 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:52:59.491852 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-17 00:52:59.491856 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-17 00:52:59.491860 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-17 00:52:59.491863 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:59.491867 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-17 00:52:59.491871 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-17 00:52:59.491874 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-17 00:52:59.491878 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-17 00:52:59.491882 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:52:59.491886 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:59.491889 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.491893 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.491897 | orchestrator |
2026-03-17 00:52:59.491900 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-17 00:52:59.491910 | orchestrator | Tuesday 17 March 2026 00:48:43 +0000 (0:00:00.965) 0:00:23.053 *********
2026-03-17 00:52:59.491914 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:52:59.491918 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:59.491922 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:52:59.491925 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:59.491929 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.491933 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.491937 | orchestrator |
2026-03-17 00:52:59.491940 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-17 00:52:59.491944 | orchestrator | Tuesday 17 March 2026 00:48:44 +0000 (0:00:00.774) 0:00:23.827 *********
2026-03-17 00:52:59.491948 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:52:59.491952 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:59.491955 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:52:59.491959 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:59.491963 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.491966 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.491970 | orchestrator |
2026-03-17 00:52:59.491974 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-17 00:52:59.491978 | orchestrator |
2026-03-17 00:52:59.491981 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-17 00:52:59.491985 | orchestrator | Tuesday 17 March 2026 00:48:45 +0000 (0:00:01.277) 0:00:25.105 *********
2026-03-17 00:52:59.491989 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.491993 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.491996 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492000 | orchestrator |
2026-03-17 00:52:59.492004 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-17 00:52:59.492007 | orchestrator | Tuesday 17 March 2026 00:48:47 +0000 (0:00:01.571) 0:00:26.676 *********
2026-03-17 00:52:59.492011 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.492015 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.492018 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492022 | orchestrator |
2026-03-17 00:52:59.492026 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-17 00:52:59.492030 | orchestrator | Tuesday 17 March 2026 00:48:48 +0000 (0:00:01.227) 0:00:27.903 *********
2026-03-17 00:52:59.492036 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.492067 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492071 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.492075 | orchestrator |
2026-03-17 00:52:59.492079 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-17 00:52:59.492082 | orchestrator | Tuesday 17 March 2026 00:48:49 +0000 (0:00:00.922) 0:00:28.826 *********
2026-03-17 00:52:59.492086 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.492090 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492093 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.492097 | orchestrator |
2026-03-17 00:52:59.492101 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-17 00:52:59.492105 | orchestrator | Tuesday 17 March 2026 00:48:50 +0000 (0:00:00.745) 0:00:29.571 *********
2026-03-17 00:52:59.492108 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:59.492112 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.492116 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.492119 | orchestrator |
2026-03-17 00:52:59.492123 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-17 00:52:59.492127 | orchestrator | Tuesday 17 March 2026 00:48:50 +0000 (0:00:00.249) 0:00:29.821 *********
2026-03-17 00:52:59.492131 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492134 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:59.492138 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:59.492142 | orchestrator |
2026-03-17 00:52:59.492146 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-17 00:52:59.492150 | orchestrator | Tuesday 17 March 2026 00:48:51 +0000 (0:00:01.180) 0:00:31.002 *********
2026-03-17 00:52:59.492153 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:59.492157 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492161 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:59.492165 | orchestrator |
2026-03-17 00:52:59.492168 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-17 00:52:59.492174 | orchestrator | Tuesday 17 March 2026 00:48:53 +0000 (0:00:02.080) 0:00:33.083 *********
2026-03-17 00:52:59.492178 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:52:59.492182 | orchestrator |
2026-03-17 00:52:59.492186 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-17 00:52:59.492190 | orchestrator | Tuesday 17 March 2026 00:48:54 +0000 (0:00:00.468) 0:00:33.552 *********
2026-03-17 00:52:59.492193 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.492197 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492201 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.492204 | orchestrator |
2026-03-17 00:52:59.492208 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-17 00:52:59.492212 | orchestrator | Tuesday 17 March 2026 00:48:56 +0000 (0:00:02.328) 0:00:35.881 *********
2026-03-17 00:52:59.492216 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.492219 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.492223 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492227 | orchestrator |
2026-03-17 00:52:59.492231 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-17 00:52:59.492234 | orchestrator | Tuesday 17 March 2026 00:48:57 +0000 (0:00:00.723) 0:00:36.604 *********
2026-03-17 00:52:59.492238 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.492242 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.492245 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492249 | orchestrator |
2026-03-17 00:52:59.492253 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-17 00:52:59.492257 | orchestrator | Tuesday 17 March 2026 00:48:58 +0000 (0:00:00.843) 0:00:37.448 *********
2026-03-17 00:52:59.492260 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.492267 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.492271 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492274 | orchestrator |
2026-03-17 00:52:59.492278 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-17 00:52:59.492285 | orchestrator | Tuesday 17 March 2026 00:48:59 +0000 (0:00:01.493) 0:00:38.942 *********
2026-03-17 00:52:59.492289 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:59.492292 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.492296 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.492300 | orchestrator |
2026-03-17 00:52:59.492304 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-17 00:52:59.492307 | orchestrator | Tuesday 17 March 2026 00:49:00 +0000 (0:00:00.809) 0:00:39.751 *********
2026-03-17 00:52:59.492311 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:59.492315 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.492319 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.492323 | orchestrator |
2026-03-17 00:52:59.492326 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-17 00:52:59.492330 | orchestrator | Tuesday 17 March 2026 00:49:00 +0000 (0:00:00.326) 0:00:40.078 *********
2026-03-17 00:52:59.492334 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492338 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:59.492341 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:59.492345 | orchestrator |
2026-03-17 00:52:59.492349 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-17 00:52:59.492353 | orchestrator | Tuesday 17 March 2026 00:49:02 +0000 (0:00:01.222) 0:00:41.300 *********
2026-03-17 00:52:59.492356 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.492360 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492364 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.492368 | orchestrator |
2026-03-17 00:52:59.492371 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-17 00:52:59.492376 | orchestrator | Tuesday 17 March 2026 00:49:04 +0000 (0:00:02.759) 0:00:44.059 *********
2026-03-17 00:52:59.492383 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.492393 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492399 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.492405 | orchestrator |
2026-03-17 00:52:59.492412 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-17 00:52:59.492420 | orchestrator | Tuesday 17 March 2026 00:49:05 +0000 (0:00:00.594) 0:00:44.654 *********
2026-03-17 00:52:59.492427 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-17 00:52:59.492435 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-17 00:52:59.492442 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-17 00:52:59.492447 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-17 00:52:59.492451 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-17 00:52:59.492454 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-17 00:52:59.492458 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-17 00:52:59.492462 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-17 00:52:59.492466 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-17 00:52:59.492475 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-17 00:52:59.492479 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-17 00:52:59.492483 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-17 00:52:59.492487 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.492490 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492494 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.492498 | orchestrator |
2026-03-17 00:52:59.492502 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-17 00:52:59.492505 | orchestrator | Tuesday 17 March 2026 00:49:48 +0000 (0:00:43.381) 0:01:28.035 *********
2026-03-17 00:52:59.492509 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:59.492513 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.492516 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.492520 | orchestrator |
2026-03-17 00:52:59.492524 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-17 00:52:59.492528 | orchestrator | Tuesday 17 March 2026 00:49:49 +0000 (0:00:00.286) 0:01:28.322 *********
2026-03-17 00:52:59.492531 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492535 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:59.492539 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:59.492542 | orchestrator |
2026-03-17 00:52:59.492546 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-17 00:52:59.492550 | orchestrator | Tuesday 17 March 2026 00:49:50 +0000 (0:00:01.072) 0:01:29.394 *********
2026-03-17 00:52:59.492554 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492557 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:59.492562 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:59.492568 | orchestrator |
2026-03-17 00:52:59.492578 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-17 00:52:59.492585 | orchestrator | Tuesday 17 March 2026 00:49:51 +0000 (0:00:01.219) 0:01:30.614 *********
2026-03-17 00:52:59.492591 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:59.492597 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492604 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:59.492610 | orchestrator |
2026-03-17 00:52:59.492614 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-17 00:52:59.492618 | orchestrator | Tuesday 17 March 2026 00:50:30 +0000 (0:00:39.374) 0:02:09.988 *********
2026-03-17 00:52:59.492622 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492626 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.492629 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.492633 | orchestrator |
2026-03-17 00:52:59.492637 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-17 00:52:59.492641 | orchestrator | Tuesday 17 March 2026 00:50:31 +0000 (0:00:00.662) 0:02:10.651 *********
2026-03-17 00:52:59.492644 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.492648 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492652 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.492655 | orchestrator |
2026-03-17 00:52:59.492659 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-17 00:52:59.492663 | orchestrator | Tuesday 17 March 2026 00:50:32 +0000 (0:00:00.596) 0:02:11.248 *********
2026-03-17 00:52:59.492666 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492670 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:59.492674 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:59.492677 | orchestrator |
2026-03-17 00:52:59.492681 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-17 00:52:59.492685 | orchestrator | Tuesday 17 March 2026 00:50:32 +0000 (0:00:00.631) 0:02:11.880 *********
2026-03-17 00:52:59.492692 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.492696 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492700 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.492704 | orchestrator |
2026-03-17 00:52:59.492707 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-17 00:52:59.492711 | orchestrator | Tuesday 17 March 2026 00:50:33 +0000 (0:00:00.845) 0:02:12.725 *********
2026-03-17 00:52:59.492715 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.492718 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492722 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.492726 | orchestrator |
2026-03-17 00:52:59.492730 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-17 00:52:59.492733 | orchestrator | Tuesday 17 March 2026 00:50:33 +0000 (0:00:00.291) 0:02:13.017 *********
2026-03-17 00:52:59.492737 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492741 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:59.492745 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:59.492748 | orchestrator |
2026-03-17 00:52:59.492752 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-17 00:52:59.492756 | orchestrator | Tuesday 17 March 2026 00:50:34 +0000 (0:00:00.591) 0:02:13.609 *********
2026-03-17 00:52:59.492759 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492763 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:59.492767 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:59.492771 | orchestrator |
2026-03-17 00:52:59.492774 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-17 00:52:59.492778 | orchestrator | Tuesday 17 March 2026 00:50:35 +0000 (0:00:00.568) 0:02:14.177 *********
2026-03-17 00:52:59.492782 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492786 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:59.492789 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:59.492793 | orchestrator |
2026-03-17 00:52:59.492797 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-17 00:52:59.492800 | orchestrator | Tuesday 17 March 2026 00:50:35 +0000 (0:00:00.964) 0:02:15.142 *********
2026-03-17 00:52:59.492804 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:52:59.492808 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:52:59.492811 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:52:59.492815 | orchestrator |
2026-03-17 00:52:59.492822 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-17 00:52:59.492825 | orchestrator | Tuesday 17 March 2026 00:50:36 +0000 (0:00:00.713) 0:02:15.855 *********
2026-03-17 00:52:59.492829 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:59.492833 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.492837 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.492840 | orchestrator |
2026-03-17 00:52:59.492844 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-17 00:52:59.492848 | orchestrator | Tuesday 17 March 2026 00:50:36 +0000 (0:00:00.287) 0:02:16.143 *********
2026-03-17 00:52:59.492851 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:52:59.492855 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:52:59.492859 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:52:59.492863 | orchestrator |
2026-03-17 00:52:59.492866 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-17 00:52:59.492870 | orchestrator | Tuesday 17 March 2026 00:50:37 +0000 (0:00:00.287) 0:02:16.430 *********
2026-03-17 00:52:59.492874 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.492877 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492881 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.492885 | orchestrator |
2026-03-17 00:52:59.492888 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-17 00:52:59.492892 | orchestrator | Tuesday 17 March 2026 00:50:38 +0000 (0:00:00.814) 0:02:17.244 *********
2026-03-17 00:52:59.492896 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:52:59.492902 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:52:59.492906 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:52:59.492909 | orchestrator |
2026-03-17 00:52:59.492913 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-17 00:52:59.492917 | orchestrator | Tuesday 17 March 2026 00:50:38 +0000 (0:00:00.565) 0:02:17.810 *********
2026-03-17 00:52:59.492921 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-17 00:52:59.492927 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-17 00:52:59.492931 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-17 00:52:59.492935 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-17 00:52:59.492939 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-17 00:52:59.492942 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-17 00:52:59.492946 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-17 00:52:59.492950 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-17 00:52:59.492954 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-17 00:52:59.492958 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-17 00:52:59.492961 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-17 00:52:59.492965 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-17 00:52:59.492969 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-17 00:52:59.492972 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-17 00:52:59.492976 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-17 00:52:59.492980 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-17 00:52:59.492983 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-17 00:52:59.492987 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-17 00:52:59.492991 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-17 00:52:59.492995 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-17 00:52:59.492998 | orchestrator |
2026-03-17 00:52:59.493002 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-17 00:52:59.493006 | orchestrator |
2026-03-17 00:52:59.493010 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-17 00:52:59.493013 | orchestrator | Tuesday 17 March 2026 00:50:41 +0000 (0:00:02.769) 0:02:20.579 *********
2026-03-17 00:52:59.493017 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:52:59.493021 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:52:59.493024 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:52:59.493028 | orchestrator |
2026-03-17 00:52:59.493032 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-17 00:52:59.493036 | orchestrator | Tuesday 17 March 2026 00:50:41 +0000 (0:00:00.410) 0:02:20.990 *********
2026-03-17 00:52:59.493051 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:52:59.493055 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:52:59.493058 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:52:59.493062 | orchestrator |
2026-03-17 00:52:59.493066 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-17 00:52:59.493072 | orchestrator | Tuesday 17 March 2026 00:50:42 +0000 (0:00:00.595) 0:02:21.585 *********
2026-03-17 00:52:59.493076 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:52:59.493079 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:52:59.493083 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:52:59.493087 | orchestrator |
2026-03-17 00:52:59.493093 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-17 00:52:59.493097 | orchestrator | Tuesday 17 March 2026 00:50:42 +0000 (0:00:00.355) 0:02:21.941 *********
2026-03-17 00:52:59.493100 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:52:59.493104 | orchestrator |
2026-03-17 00:52:59.493108 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-17 00:52:59.493112 | orchestrator | Tuesday 17 March 2026 00:50:43 +0000 (0:00:00.536) 0:02:22.478 *********
2026-03-17 00:52:59.493115 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:52:59.493119 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:59.493123 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:52:59.493127 | orchestrator |
2026-03-17 00:52:59.493130 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-17 00:52:59.493134 | orchestrator | Tuesday 17 March 2026 00:50:43 +0000 (0:00:00.251) 0:02:22.730 *********
2026-03-17 00:52:59.493138 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:52:59.493225 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:59.493229 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:52:59.493233 | orchestrator |
2026-03-17 00:52:59.493237 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-17 00:52:59.493241 | orchestrator | Tuesday 17 March 2026 00:50:43 +0000 (0:00:00.292) 0:02:23.022 *********
2026-03-17 00:52:59.493245 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:52:59.493248 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:52:59.493252 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:52:59.493256 | orchestrator |
2026-03-17 00:52:59.493260 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-17 00:52:59.493263 | orchestrator | Tuesday 17 March 2026 00:50:44 +0000 (0:00:00.219) 0:02:23.241 *********
2026-03-17 00:52:59.493268 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:52:59.493274 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:52:59.493280 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:52:59.493284 | orchestrator |
2026-03-17 00:52:59.493294 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-17 00:52:59.493301 | orchestrator | Tuesday 17 March 2026 00:50:44 +0000 (0:00:00.821) 0:02:24.063 *********
2026-03-17 00:52:59.493310 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:52:59.493317 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:52:59.493323 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:52:59.493329 | orchestrator |
2026-03-17 00:52:59.493335 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-17 00:52:59.493342 | orchestrator | Tuesday 17 March 2026 00:50:46 +0000 (0:00:01.160) 0:02:25.223 *********
2026-03-17 00:52:59.493346 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:52:59.493352 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:52:59.493358 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:52:59.493364 | orchestrator |
2026-03-17 00:52:59.493371 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-17 00:52:59.493377 | orchestrator | Tuesday 17 March 2026 00:50:47 +0000 (0:00:01.412) 0:02:26.635 *********
2026-03-17 00:52:59.493384 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:52:59.493390 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:52:59.493396 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:52:59.493402 | orchestrator |
2026-03-17 00:52:59.493408 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-17 00:52:59.493414 | orchestrator |
2026-03-17 00:52:59.493424 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-17 00:52:59.493437 | orchestrator | Tuesday 17 March 2026 00:50:57 +0000 (0:00:09.919) 0:02:36.555 *********
2026-03-17 00:52:59.493443 | orchestrator | ok: [testbed-manager]
2026-03-17 00:52:59.493449 | orchestrator |
2026-03-17 00:52:59.493456 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-17 00:52:59.493462 | orchestrator | Tuesday 17 March 2026 00:50:58 +0000 (0:00:00.843) 0:02:37.398 *********
2026-03-17 00:52:59.493468 | orchestrator | changed: [testbed-manager]
2026-03-17 00:52:59.493474 | orchestrator |
2026-03-17 00:52:59.493478 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-17 00:52:59.493481 | orchestrator | Tuesday 17 March 2026 00:50:58 +0000 (0:00:00.381) 0:02:37.780 *********
2026-03-17 00:52:59.493485 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-17 00:52:59.493489 | orchestrator |
2026-03-17 00:52:59.493493 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-17 00:52:59.493496 | orchestrator | Tuesday 17 March 2026 00:50:59 +0000 (0:00:00.543) 0:02:38.323 *********
2026-03-17 00:52:59.493500 | orchestrator | changed: [testbed-manager]
2026-03-17 00:52:59.493504 | orchestrator |
2026-03-17 00:52:59.493508 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-17 00:52:59.493512 | orchestrator | Tuesday 17 March 2026 00:50:59 +0000 (0:00:00.818) 0:02:39.141 *********
2026-03-17 00:52:59.493515 | orchestrator | changed: [testbed-manager]
2026-03-17 00:52:59.493519 | orchestrator |
2026-03-17 00:52:59.493523 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-17 00:52:59.493526 | orchestrator | Tuesday 17 March 2026 00:51:00 +0000 (0:00:00.441) 0:02:39.583 *********
2026-03-17 00:52:59.493530 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-17 00:52:59.493534 | orchestrator |
2026-03-17 00:52:59.493538 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-17 00:52:59.493541 | orchestrator | Tuesday 17 March 2026 00:51:01 +0000 (0:00:01.192) 0:02:40.775 *********
2026-03-17 00:52:59.493545 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-17 00:52:59.493549 | orchestrator |
2026-03-17 00:52:59.493552 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-17 00:52:59.493556 | orchestrator | Tuesday 17 March 2026 00:51:02 +0000 (0:00:00.623) 0:02:41.399 *********
2026-03-17 00:52:59.493560 | orchestrator | changed: [testbed-manager]
2026-03-17 00:52:59.493563 | orchestrator |
2026-03-17 00:52:59.493567 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-17 00:52:59.493571 | orchestrator | Tuesday 17 March 2026 00:51:02 +0000 (0:00:00.346) 0:02:41.745 *********
2026-03-17 00:52:59.493578 | orchestrator | changed: [testbed-manager]
2026-03-17 00:52:59.493582 | orchestrator |
2026-03-17 00:52:59.493586 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-17 00:52:59.493590 | orchestrator |
2026-03-17 00:52:59.493593 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-17 00:52:59.493597 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:00.799) 0:02:42.545 *********
2026-03-17 00:52:59.493601 | orchestrator | ok: [testbed-manager]
2026-03-17 00:52:59.493604 | orchestrator |
2026-03-17 00:52:59.493608 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-17 00:52:59.493612 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:00.116) 0:02:42.662 *********
2026-03-17 00:52:59.493616 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-17 00:52:59.493619 | orchestrator |
2026-03-17 00:52:59.493623 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-17 00:52:59.493627 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:00.167) 0:02:42.829 *********
2026-03-17 00:52:59.493631 | orchestrator | ok: [testbed-manager]
2026-03-17 00:52:59.493634 | orchestrator |
2026-03-17 00:52:59.493638 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-17 00:52:59.493705 | orchestrator | Tuesday 17 March 2026 00:51:04 +0000 (0:00:00.879) 0:02:43.708 *********
2026-03-17 00:52:59.493714 | orchestrator | ok: [testbed-manager]
2026-03-17 00:52:59.493720 | orchestrator |
2026-03-17 00:52:59.493726 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-17 00:52:59.493777 | orchestrator | Tuesday 17 March 2026 00:51:05 +0000 (0:00:01.253) 0:02:44.962 *********
2026-03-17 00:52:59.493782 | orchestrator | changed: [testbed-manager]
2026-03-17 00:52:59.493786 | orchestrator |
2026-03-17 00:52:59.493789 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-17 00:52:59.493793 | orchestrator | Tuesday 17 March 2026 00:51:06 +0000 (0:00:00.727) 0:02:45.689 *********
2026-03-17 00:52:59.493797 | orchestrator | ok: [testbed-manager]
2026-03-17 00:52:59.493801 | orchestrator |
2026-03-17 00:52:59.493809 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-17 00:52:59.493815 | orchestrator | Tuesday 17 March 2026 00:51:06 +0000 (0:00:00.373) 0:02:46.063 ********* 2026-03-17 00:52:59.493823 | orchestrator | changed: [testbed-manager] 2026-03-17 00:52:59.493831 | orchestrator | 2026-03-17 00:52:59.493837 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-17 00:52:59.493842 | orchestrator | Tuesday 17 March 2026 00:51:13 +0000 (0:00:06.704) 0:02:52.768 ********* 2026-03-17 00:52:59.493848 | orchestrator | changed: [testbed-manager] 2026-03-17 00:52:59.493853 | orchestrator | 2026-03-17 00:52:59.493858 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-17 00:52:59.493864 | orchestrator | Tuesday 17 March 2026 00:51:26 +0000 (0:00:13.202) 0:03:05.971 ********* 2026-03-17 00:52:59.493870 | orchestrator | ok: [testbed-manager] 2026-03-17 00:52:59.493876 | orchestrator | 2026-03-17 00:52:59.493881 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-17 00:52:59.493887 | orchestrator | 2026-03-17 00:52:59.493892 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-17 00:52:59.493898 | orchestrator | Tuesday 17 March 2026 00:51:27 +0000 (0:00:00.594) 0:03:06.565 ********* 2026-03-17 00:52:59.493903 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:52:59.493909 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:52:59.493915 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:52:59.493922 | orchestrator | 2026-03-17 00:52:59.493928 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-17 00:52:59.493934 | orchestrator | Tuesday 17 March 2026 00:51:27 +0000 (0:00:00.355) 0:03:06.920 ********* 2026-03-17 00:52:59.493941 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.493946 | orchestrator | skipping: [testbed-node-1] 
2026-03-17 00:52:59.493950 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:52:59.493954 | orchestrator | 2026-03-17 00:52:59.493958 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-17 00:52:59.493962 | orchestrator | Tuesday 17 March 2026 00:51:28 +0000 (0:00:00.319) 0:03:07.240 ********* 2026-03-17 00:52:59.493966 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:52:59.493969 | orchestrator | 2026-03-17 00:52:59.493973 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-17 00:52:59.493977 | orchestrator | Tuesday 17 March 2026 00:51:28 +0000 (0:00:00.644) 0:03:07.884 ********* 2026-03-17 00:52:59.493981 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-17 00:52:59.493985 | orchestrator | 2026-03-17 00:52:59.493988 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-17 00:52:59.493992 | orchestrator | Tuesday 17 March 2026 00:51:29 +0000 (0:00:00.797) 0:03:08.681 ********* 2026-03-17 00:52:59.493996 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 00:52:59.494000 | orchestrator | 2026-03-17 00:52:59.494003 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-17 00:52:59.494007 | orchestrator | Tuesday 17 March 2026 00:51:30 +0000 (0:00:00.822) 0:03:09.504 ********* 2026-03-17 00:52:59.494126 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.494133 | orchestrator | 2026-03-17 00:52:59.494137 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-17 00:52:59.494141 | orchestrator | Tuesday 17 March 2026 00:51:30 +0000 (0:00:00.114) 0:03:09.619 ********* 2026-03-17 00:52:59.494145 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 00:52:59.494149 | 
orchestrator | 2026-03-17 00:52:59.494152 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-17 00:52:59.494156 | orchestrator | Tuesday 17 March 2026 00:51:31 +0000 (0:00:00.829) 0:03:10.448 ********* 2026-03-17 00:52:59.494160 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.494163 | orchestrator | 2026-03-17 00:52:59.494167 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-17 00:52:59.494171 | orchestrator | Tuesday 17 March 2026 00:51:31 +0000 (0:00:00.096) 0:03:10.544 ********* 2026-03-17 00:52:59.494179 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.494183 | orchestrator | 2026-03-17 00:52:59.494186 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-17 00:52:59.494190 | orchestrator | Tuesday 17 March 2026 00:51:31 +0000 (0:00:00.111) 0:03:10.655 ********* 2026-03-17 00:52:59.494194 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.494198 | orchestrator | 2026-03-17 00:52:59.494201 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-17 00:52:59.494205 | orchestrator | Tuesday 17 March 2026 00:51:31 +0000 (0:00:00.138) 0:03:10.794 ********* 2026-03-17 00:52:59.494209 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.494212 | orchestrator | 2026-03-17 00:52:59.494216 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-17 00:52:59.494220 | orchestrator | Tuesday 17 March 2026 00:51:31 +0000 (0:00:00.229) 0:03:11.024 ********* 2026-03-17 00:52:59.494224 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-17 00:52:59.494227 | orchestrator | 2026-03-17 00:52:59.494231 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-17 00:52:59.494235 | orchestrator | Tuesday 17 March 
2026 00:51:37 +0000 (0:00:05.188) 0:03:16.212 ********* 2026-03-17 00:52:59.494239 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-17 00:52:59.494242 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2026-03-17 00:52:59.494247 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-17 00:52:59.494254 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-17 00:52:59.494265 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-17 00:52:59.494271 | orchestrator | 2026-03-17 00:52:59.494278 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-17 00:52:59.494284 | orchestrator | Tuesday 17 March 2026 00:52:31 +0000 (0:00:54.375) 0:04:10.588 ********* 2026-03-17 00:52:59.494295 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 00:52:59.494302 | orchestrator | 2026-03-17 00:52:59.494309 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-17 00:52:59.494316 | orchestrator | Tuesday 17 March 2026 00:52:32 +0000 (0:00:01.119) 0:04:11.707 ********* 2026-03-17 00:52:59.494324 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-17 00:52:59.494331 | orchestrator | 2026-03-17 00:52:59.494338 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-17 00:52:59.494345 | orchestrator | Tuesday 17 March 2026 00:52:34 +0000 (0:00:01.860) 0:04:13.568 ********* 2026-03-17 00:52:59.494353 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-17 00:52:59.494362 | orchestrator | 2026-03-17 00:52:59.494368 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-17 00:52:59.494375 | orchestrator | Tuesday 17 March 2026 00:52:35 +0000 
(0:00:01.015) 0:04:14.583 ********* 2026-03-17 00:52:59.494381 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.494393 | orchestrator | 2026-03-17 00:52:59.494399 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-17 00:52:59.494405 | orchestrator | Tuesday 17 March 2026 00:52:35 +0000 (0:00:00.099) 0:04:14.683 ********* 2026-03-17 00:52:59.494412 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-17 00:52:59.494418 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-17 00:52:59.494425 | orchestrator | 2026-03-17 00:52:59.494432 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-17 00:52:59.494438 | orchestrator | Tuesday 17 March 2026 00:52:37 +0000 (0:00:01.503) 0:04:16.187 ********* 2026-03-17 00:52:59.494445 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.494452 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:52:59.494460 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:52:59.494467 | orchestrator | 2026-03-17 00:52:59.494475 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-17 00:52:59.494480 | orchestrator | Tuesday 17 March 2026 00:52:37 +0000 (0:00:00.284) 0:04:16.471 ********* 2026-03-17 00:52:59.494485 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:52:59.494489 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:52:59.494494 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:52:59.494498 | orchestrator | 2026-03-17 00:52:59.494502 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-17 00:52:59.494506 | orchestrator | 2026-03-17 00:52:59.494511 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-17 
00:52:59.494515 | orchestrator | Tuesday 17 March 2026 00:52:38 +0000 (0:00:00.823) 0:04:17.295 ********* 2026-03-17 00:52:59.494519 | orchestrator | ok: [testbed-manager] 2026-03-17 00:52:59.494523 | orchestrator | 2026-03-17 00:52:59.494527 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-03-17 00:52:59.494532 | orchestrator | Tuesday 17 March 2026 00:52:38 +0000 (0:00:00.102) 0:04:17.398 ********* 2026-03-17 00:52:59.494536 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-17 00:52:59.494540 | orchestrator | 2026-03-17 00:52:59.494545 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-17 00:52:59.494549 | orchestrator | Tuesday 17 March 2026 00:52:38 +0000 (0:00:00.160) 0:04:17.558 ********* 2026-03-17 00:52:59.494553 | orchestrator | changed: [testbed-manager] 2026-03-17 00:52:59.494557 | orchestrator | 2026-03-17 00:52:59.494562 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-17 00:52:59.494566 | orchestrator | 2026-03-17 00:52:59.494570 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-17 00:52:59.494574 | orchestrator | Tuesday 17 March 2026 00:52:43 +0000 (0:00:05.033) 0:04:22.592 ********* 2026-03-17 00:52:59.494579 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:52:59.494583 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:52:59.494587 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:52:59.494591 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:52:59.494596 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:52:59.494603 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:52:59.494607 | orchestrator | 2026-03-17 00:52:59.494612 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-17 00:52:59.494616 | orchestrator | 
Tuesday 17 March 2026 00:52:44 +0000 (0:00:00.755) 0:04:23.347 ********* 2026-03-17 00:52:59.494620 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-17 00:52:59.494625 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-17 00:52:59.494629 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-17 00:52:59.494633 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-17 00:52:59.494636 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-17 00:52:59.494644 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-17 00:52:59.494648 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-17 00:52:59.494652 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-17 00:52:59.494655 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-17 00:52:59.494659 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-17 00:52:59.494663 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-17 00:52:59.494666 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-17 00:52:59.494674 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-17 00:52:59.494678 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-17 00:52:59.494682 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-17 00:52:59.494686 | orchestrator | 
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-17 00:52:59.494689 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-17 00:52:59.494693 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-17 00:52:59.494697 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-17 00:52:59.494700 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-17 00:52:59.494704 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-17 00:52:59.494708 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-17 00:52:59.494711 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-17 00:52:59.494715 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-17 00:52:59.494719 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-17 00:52:59.494723 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-17 00:52:59.494726 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-17 00:52:59.494730 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-17 00:52:59.494734 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-17 00:52:59.494737 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-17 00:52:59.494741 | orchestrator | 2026-03-17 00:52:59.494745 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-17 
00:52:59.494749 | orchestrator | Tuesday 17 March 2026 00:52:55 +0000 (0:00:11.262) 0:04:34.610 ********* 2026-03-17 00:52:59.494752 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:52:59.494756 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:52:59.494760 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:52:59.494764 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.494767 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:52:59.494771 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:52:59.494775 | orchestrator | 2026-03-17 00:52:59.494781 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-17 00:52:59.494788 | orchestrator | Tuesday 17 March 2026 00:52:56 +0000 (0:00:00.694) 0:04:35.305 ********* 2026-03-17 00:52:59.494797 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:52:59.494802 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:52:59.494812 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:52:59.494818 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:52:59.494823 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:52:59.494829 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:52:59.494834 | orchestrator | 2026-03-17 00:52:59.494840 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:52:59.494846 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:52:59.494855 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-17 00:52:59.494862 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-17 00:52:59.494868 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-17 00:52:59.494875 | orchestrator | 
testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-17 00:52:59.494881 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-17 00:52:59.494888 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-17 00:52:59.494894 | orchestrator | 2026-03-17 00:52:59.494899 | orchestrator | 2026-03-17 00:52:59.494903 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:52:59.494907 | orchestrator | Tuesday 17 March 2026 00:52:56 +0000 (0:00:00.420) 0:04:35.725 ********* 2026-03-17 00:52:59.494911 | orchestrator | =============================================================================== 2026-03-17 00:52:59.494914 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 54.38s 2026-03-17 00:52:59.494918 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.38s 2026-03-17 00:52:59.494922 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 39.37s 2026-03-17 00:52:59.494929 | orchestrator | kubectl : Install required packages ------------------------------------ 13.20s 2026-03-17 00:52:59.494933 | orchestrator | Manage labels ---------------------------------------------------------- 11.26s 2026-03-17 00:52:59.494938 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.92s 2026-03-17 00:52:59.494944 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.70s 2026-03-17 00:52:59.494950 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.55s 2026-03-17 00:52:59.494957 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.19s 2026-03-17 00:52:59.494963 | orchestrator | k9s : 
Install k9s packages ---------------------------------------------- 5.03s 2026-03-17 00:52:59.494968 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.77s 2026-03-17 00:52:59.494975 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.76s 2026-03-17 00:52:59.494981 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.64s 2026-03-17 00:52:59.494987 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.33s 2026-03-17 00:52:59.494993 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.18s 2026-03-17 00:52:59.495000 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.08s 2026-03-17 00:52:59.495006 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.86s 2026-03-17 00:52:59.495012 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.76s 2026-03-17 00:52:59.495021 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.70s 2026-03-17 00:52:59.495025 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.57s 2026-03-17 00:52:59.495029 | orchestrator | 2026-03-17 00:52:59 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:52:59.495033 | orchestrator | 2026-03-17 00:52:59 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:52:59.495049 | orchestrator | 2026-03-17 00:52:59 | INFO  | Task 2a1bb685-06ba-40fd-bc2c-7c4b9e796252 is in state STARTED 2026-03-17 00:52:59.495053 | orchestrator | 2026-03-17 00:52:59 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:02.518784 | orchestrator | 2026-03-17 00:53:02 | INFO  | Task 
f3730e91-e299-4b96-a106-1565d63dce2f is in state STARTED 2026-03-17 00:53:02.519185 | orchestrator | 2026-03-17 00:53:02 | INFO  | Task f18b9757-28be-4e39-ba6e-369d24fe653b is in state STARTED 2026-03-17 00:53:02.524116 | orchestrator | 2026-03-17 00:53:02 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:53:02.524694 | orchestrator | 2026-03-17 00:53:02 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:53:02.525219 | orchestrator | 2026-03-17 00:53:02 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:53:02.525856 | orchestrator | 2026-03-17 00:53:02 | INFO  | Task 2a1bb685-06ba-40fd-bc2c-7c4b9e796252 is in state STARTED 2026-03-17 00:53:02.525883 | orchestrator | 2026-03-17 00:53:02 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:05.557977 | orchestrator | 2026-03-17 00:53:05 | INFO  | Task f3730e91-e299-4b96-a106-1565d63dce2f is in state SUCCESS 2026-03-17 00:53:05.558164 | orchestrator | 2026-03-17 00:53:05 | INFO  | Task f18b9757-28be-4e39-ba6e-369d24fe653b is in state STARTED 2026-03-17 00:53:05.559748 | orchestrator | 2026-03-17 00:53:05 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:53:05.560358 | orchestrator | 2026-03-17 00:53:05 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:53:05.560992 | orchestrator | 2026-03-17 00:53:05 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:53:05.561757 | orchestrator | 2026-03-17 00:53:05 | INFO  | Task 2a1bb685-06ba-40fd-bc2c-7c4b9e796252 is in state STARTED 2026-03-17 00:53:05.561825 | orchestrator | 2026-03-17 00:53:05 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:08.585678 | orchestrator | 2026-03-17 00:53:08 | INFO  | Task f18b9757-28be-4e39-ba6e-369d24fe653b is in state SUCCESS 2026-03-17 00:53:08.586811 | orchestrator | 2026-03-17 00:53:08 | INFO  | Task 
efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:53:08.587392 | orchestrator | 2026-03-17 00:53:08 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:53:08.587987 | orchestrator | 2026-03-17 00:53:08 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:53:08.590332 | orchestrator | 2026-03-17 00:53:08 | INFO  | Task 2a1bb685-06ba-40fd-bc2c-7c4b9e796252 is in state STARTED 2026-03-17 00:53:08.590382 | orchestrator | 2026-03-17 00:53:08 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:11.618291 | orchestrator | 2026-03-17 00:53:11 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state STARTED 2026-03-17 00:53:11.618485 | orchestrator | 2026-03-17 00:53:11 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:53:11.619145 | orchestrator | 2026-03-17 00:53:11 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:53:11.619769 | orchestrator | 2026-03-17 00:53:11 | INFO  | Task 2a1bb685-06ba-40fd-bc2c-7c4b9e796252 is in state STARTED 2026-03-17 00:53:11.619801 | orchestrator | 2026-03-17 00:53:11 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:14.643312 | orchestrator | 2026-03-17 00:53:14.643379 | orchestrator | 2026-03-17 00:53:14.643391 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-17 00:53:14.643401 | orchestrator | 2026-03-17 00:53:14.643409 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-17 00:53:14.643418 | orchestrator | Tuesday 17 March 2026 00:53:01 +0000 (0:00:00.143) 0:00:00.143 ********* 2026-03-17 00:53:14.643427 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-17 00:53:14.643436 | orchestrator | 2026-03-17 00:53:14.643445 | orchestrator | TASK [Write kubeconfig file] *************************************************** 
2026-03-17 00:53:14.643454 | orchestrator | Tuesday 17 March 2026 00:53:02 +0000 (0:00:00.728) 0:00:00.871 ********* 2026-03-17 00:53:14.643462 | orchestrator | changed: [testbed-manager] 2026-03-17 00:53:14.643471 | orchestrator | 2026-03-17 00:53:14.643479 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-17 00:53:14.643487 | orchestrator | Tuesday 17 March 2026 00:53:04 +0000 (0:00:01.548) 0:00:02.420 ********* 2026-03-17 00:53:14.643496 | orchestrator | changed: [testbed-manager] 2026-03-17 00:53:14.643505 | orchestrator | 2026-03-17 00:53:14.643514 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:53:14.643523 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:53:14.643532 | orchestrator | 2026-03-17 00:53:14.643541 | orchestrator | 2026-03-17 00:53:14.643549 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:53:14.643558 | orchestrator | Tuesday 17 March 2026 00:53:04 +0000 (0:00:00.428) 0:00:02.848 ********* 2026-03-17 00:53:14.643566 | orchestrator | =============================================================================== 2026-03-17 00:53:14.643574 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.55s 2026-03-17 00:53:14.643583 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.73s 2026-03-17 00:53:14.643592 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.43s 2026-03-17 00:53:14.643600 | orchestrator | 2026-03-17 00:53:14.643609 | orchestrator | 2026-03-17 00:53:14.643618 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-17 00:53:14.643646 | orchestrator | 2026-03-17 00:53:14.643655 | orchestrator | TASK [Get home directory 
of operator user] ************************************* 2026-03-17 00:53:14.643663 | orchestrator | Tuesday 17 March 2026 00:53:00 +0000 (0:00:00.176) 0:00:00.176 ********* 2026-03-17 00:53:14.643672 | orchestrator | ok: [testbed-manager] 2026-03-17 00:53:14.643681 | orchestrator | 2026-03-17 00:53:14.643690 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-17 00:53:14.643699 | orchestrator | Tuesday 17 March 2026 00:53:01 +0000 (0:00:00.452) 0:00:00.629 ********* 2026-03-17 00:53:14.643707 | orchestrator | ok: [testbed-manager] 2026-03-17 00:53:14.643716 | orchestrator | 2026-03-17 00:53:14.643725 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-17 00:53:14.643734 | orchestrator | Tuesday 17 March 2026 00:53:01 +0000 (0:00:00.496) 0:00:01.125 ********* 2026-03-17 00:53:14.643754 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-17 00:53:14.643826 | orchestrator | 2026-03-17 00:53:14.643838 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-17 00:53:14.643848 | orchestrator | Tuesday 17 March 2026 00:53:02 +0000 (0:00:00.757) 0:00:01.882 ********* 2026-03-17 00:53:14.643857 | orchestrator | changed: [testbed-manager] 2026-03-17 00:53:14.643867 | orchestrator | 2026-03-17 00:53:14.643877 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-17 00:53:14.643902 | orchestrator | Tuesday 17 March 2026 00:53:04 +0000 (0:00:02.077) 0:00:03.960 ********* 2026-03-17 00:53:14.643911 | orchestrator | changed: [testbed-manager] 2026-03-17 00:53:14.643920 | orchestrator | 2026-03-17 00:53:14.643929 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-17 00:53:14.643938 | orchestrator | Tuesday 17 March 2026 00:53:05 +0000 (0:00:00.570) 0:00:04.531 ********* 2026-03-17 
00:53:14.643948 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-17 00:53:14.643958 | orchestrator | 2026-03-17 00:53:14.643967 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-17 00:53:14.643976 | orchestrator | Tuesday 17 March 2026 00:53:06 +0000 (0:00:01.489) 0:00:06.021 ********* 2026-03-17 00:53:14.644255 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-17 00:53:14.644272 | orchestrator | 2026-03-17 00:53:14.644282 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-17 00:53:14.644291 | orchestrator | Tuesday 17 March 2026 00:53:07 +0000 (0:00:00.757) 0:00:06.778 ********* 2026-03-17 00:53:14.644300 | orchestrator | ok: [testbed-manager] 2026-03-17 00:53:14.644309 | orchestrator | 2026-03-17 00:53:14.644319 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-17 00:53:14.644328 | orchestrator | Tuesday 17 March 2026 00:53:07 +0000 (0:00:00.378) 0:00:07.157 ********* 2026-03-17 00:53:14.644338 | orchestrator | ok: [testbed-manager] 2026-03-17 00:53:14.644348 | orchestrator | 2026-03-17 00:53:14.644357 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:53:14.644366 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:53:14.644375 | orchestrator | 2026-03-17 00:53:14.644384 | orchestrator | 2026-03-17 00:53:14.644394 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:53:14.644403 | orchestrator | Tuesday 17 March 2026 00:53:07 +0000 (0:00:00.268) 0:00:07.426 ********* 2026-03-17 00:53:14.644412 | orchestrator | =============================================================================== 2026-03-17 00:53:14.644422 | orchestrator | Write kubeconfig file 
--------------------------------------------------- 2.08s 2026-03-17 00:53:14.644431 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.49s 2026-03-17 00:53:14.644440 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.76s 2026-03-17 00:53:14.644461 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.76s 2026-03-17 00:53:14.644469 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.57s 2026-03-17 00:53:14.644479 | orchestrator | Create .kube directory -------------------------------------------------- 0.50s 2026-03-17 00:53:14.644488 | orchestrator | Get home directory of operator user ------------------------------------- 0.45s 2026-03-17 00:53:14.644497 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.38s 2026-03-17 00:53:14.644507 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s 2026-03-17 00:53:14.644516 | orchestrator | 2026-03-17 00:53:14.644525 | orchestrator | 2026-03-17 00:53:14 | INFO  | Task efee1707-dced-4f7d-8ec3-c4ce0a3927cf is in state SUCCESS 2026-03-17 00:53:14.644534 | orchestrator | 2026-03-17 00:53:14.644544 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-17 00:53:14.644553 | orchestrator | 2026-03-17 00:53:14.644562 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-17 00:53:14.644572 | orchestrator | Tuesday 17 March 2026 00:51:05 +0000 (0:00:00.116) 0:00:00.116 ********* 2026-03-17 00:53:14.644581 | orchestrator | ok: [localhost] => { 2026-03-17 00:53:14.644591 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2026-03-17 00:53:14.644600 | orchestrator | } 2026-03-17 00:53:14.644610 | orchestrator | 2026-03-17 00:53:14.644628 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-17 00:53:14.644638 | orchestrator | Tuesday 17 March 2026 00:51:05 +0000 (0:00:00.034) 0:00:00.150 ********* 2026-03-17 00:53:14.644647 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-17 00:53:14.644657 | orchestrator | ...ignoring 2026-03-17 00:53:14.644667 | orchestrator | 2026-03-17 00:53:14.644676 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-17 00:53:14.644686 | orchestrator | Tuesday 17 March 2026 00:51:08 +0000 (0:00:02.756) 0:00:02.906 ********* 2026-03-17 00:53:14.644695 | orchestrator | skipping: [localhost] 2026-03-17 00:53:14.644704 | orchestrator | 2026-03-17 00:53:14.644713 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-17 00:53:14.644724 | orchestrator | Tuesday 17 March 2026 00:51:08 +0000 (0:00:00.044) 0:00:02.951 ********* 2026-03-17 00:53:14.644733 | orchestrator | ok: [localhost] 2026-03-17 00:53:14.644742 | orchestrator | 2026-03-17 00:53:14.644751 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:53:14.644760 | orchestrator | 2026-03-17 00:53:14.644769 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:53:14.644779 | orchestrator | Tuesday 17 March 2026 00:51:08 +0000 (0:00:00.157) 0:00:03.108 ********* 2026-03-17 00:53:14.644788 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:53:14.644797 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:53:14.644807 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:53:14.644816 | orchestrator | 2026-03-17 
00:53:14.644832 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:53:14.644841 | orchestrator | Tuesday 17 March 2026 00:51:08 +0000 (0:00:00.401) 0:00:03.509 ********* 2026-03-17 00:53:14.644850 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-17 00:53:14.644860 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-17 00:53:14.644869 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-17 00:53:14.644878 | orchestrator | 2026-03-17 00:53:14.644887 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-17 00:53:14.644897 | orchestrator | 2026-03-17 00:53:14.644906 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-17 00:53:14.644916 | orchestrator | Tuesday 17 March 2026 00:51:09 +0000 (0:00:00.640) 0:00:04.150 ********* 2026-03-17 00:53:14.644926 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:53:14.644936 | orchestrator | 2026-03-17 00:53:14.644945 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-17 00:53:14.644954 | orchestrator | Tuesday 17 March 2026 00:51:10 +0000 (0:00:00.836) 0:00:04.986 ********* 2026-03-17 00:53:14.644964 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:53:14.645037 | orchestrator | 2026-03-17 00:53:14.645048 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-17 00:53:14.645079 | orchestrator | Tuesday 17 March 2026 00:51:11 +0000 (0:00:01.060) 0:00:06.047 ********* 2026-03-17 00:53:14.645087 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:53:14.645097 | orchestrator | 2026-03-17 00:53:14.645106 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2026-03-17 00:53:14.645114 | orchestrator | Tuesday 17 March 2026 00:51:11 +0000 (0:00:00.396) 0:00:06.444 ********* 2026-03-17 00:53:14.645123 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:53:14.645132 | orchestrator | 2026-03-17 00:53:14.645141 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-17 00:53:14.645150 | orchestrator | Tuesday 17 March 2026 00:51:12 +0000 (0:00:00.917) 0:00:07.361 ********* 2026-03-17 00:53:14.645158 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:53:14.645167 | orchestrator | 2026-03-17 00:53:14.645175 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-17 00:53:14.645198 | orchestrator | Tuesday 17 March 2026 00:51:13 +0000 (0:00:01.112) 0:00:08.474 ********* 2026-03-17 00:53:14.645207 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:53:14.645216 | orchestrator | 2026-03-17 00:53:14.645226 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-17 00:53:14.645235 | orchestrator | Tuesday 17 March 2026 00:51:15 +0000 (0:00:02.015) 0:00:10.490 ********* 2026-03-17 00:53:14.645244 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:53:14.645255 | orchestrator | 2026-03-17 00:53:14.645272 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-17 00:53:14.645281 | orchestrator | Tuesday 17 March 2026 00:51:16 +0000 (0:00:01.210) 0:00:11.700 ********* 2026-03-17 00:53:14.645290 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:53:14.645299 | orchestrator | 2026-03-17 00:53:14.645307 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-17 00:53:14.645316 | orchestrator | Tuesday 17 March 2026 00:51:17 +0000 (0:00:00.958) 0:00:12.658 ********* 2026-03-17 
00:53:14.645324 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:53:14.645332 | orchestrator | 2026-03-17 00:53:14.645341 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-17 00:53:14.645350 | orchestrator | Tuesday 17 March 2026 00:51:18 +0000 (0:00:00.319) 0:00:12.977 ********* 2026-03-17 00:53:14.645358 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:53:14.645366 | orchestrator | 2026-03-17 00:53:14.645375 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-17 00:53:14.645383 | orchestrator | Tuesday 17 March 2026 00:51:18 +0000 (0:00:00.448) 0:00:13.426 ********* 2026-03-17 00:53:14.645396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:14.645409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:14.645419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:14.645434 | orchestrator | 2026-03-17 00:53:14.645444 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-17 00:53:14.645457 | orchestrator | Tuesday 17 March 2026 00:51:19 +0000 (0:00:00.948) 0:00:14.375 ********* 2026-03-17 00:53:14.645515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:14.645537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:14.645547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:14.645562 | orchestrator | 2026-03-17 00:53:14.645570 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-17 00:53:14.645579 | orchestrator | Tuesday 17 March 2026 00:51:21 +0000 (0:00:01.698) 0:00:16.074 ********* 2026-03-17 00:53:14.645588 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-17 00:53:14.645596 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-17 00:53:14.645605 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-17 00:53:14.645613 | orchestrator | 2026-03-17 00:53:14.645622 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-17 00:53:14.645630 | orchestrator | Tuesday 17 March 2026 00:51:22 +0000 (0:00:01.352) 0:00:17.427 ********* 2026-03-17 00:53:14.645638 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-17 00:53:14.645647 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-17 00:53:14.645661 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-17 00:53:14.645670 | orchestrator | 2026-03-17 00:53:14.645679 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-17 00:53:14.645687 | orchestrator | Tuesday 17 March 2026 00:51:24 +0000 (0:00:02.279) 0:00:19.707 ********* 2026-03-17 00:53:14.645696 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-17 00:53:14.645704 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-17 00:53:14.645712 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-17 00:53:14.645721 | orchestrator | 2026-03-17 00:53:14.645729 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-17 00:53:14.645737 | orchestrator | Tuesday 17 March 2026 00:51:26 +0000 (0:00:01.744) 0:00:21.451 ********* 
2026-03-17 00:53:14.645746 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-17 00:53:14.645755 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-17 00:53:14.645764 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-17 00:53:14.645772 | orchestrator | 2026-03-17 00:53:14.645780 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-17 00:53:14.645788 | orchestrator | Tuesday 17 March 2026 00:51:28 +0000 (0:00:02.242) 0:00:23.694 ********* 2026-03-17 00:53:14.645797 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-17 00:53:14.645805 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-17 00:53:14.645813 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-17 00:53:14.645822 | orchestrator | 2026-03-17 00:53:14.645831 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-17 00:53:14.645840 | orchestrator | Tuesday 17 March 2026 00:51:30 +0000 (0:00:01.709) 0:00:25.403 ********* 2026-03-17 00:53:14.645848 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-17 00:53:14.645856 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-17 00:53:14.645870 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-17 00:53:14.645879 | orchestrator | 2026-03-17 00:53:14.645887 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-17 00:53:14.645895 | orchestrator | Tuesday 17 
March 2026 00:51:32 +0000 (0:00:01.450) 0:00:26.854 ********* 2026-03-17 00:53:14.645907 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:53:14.645916 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:53:14.645925 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:53:14.645934 | orchestrator | 2026-03-17 00:53:14.645942 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-17 00:53:14.645950 | orchestrator | Tuesday 17 March 2026 00:51:32 +0000 (0:00:00.731) 0:00:27.585 ********* 2026-03-17 00:53:14.645959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:14.645982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:14.645992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:53:14.646007 | orchestrator | 2026-03-17 00:53:14.646157 | orchestrator | TASK [rabbitmq : Creating 
rabbitmq volume] ************************************* 2026-03-17 00:53:14.646169 | orchestrator | Tuesday 17 March 2026 00:51:34 +0000 (0:00:01.405) 0:00:28.991 ********* 2026-03-17 00:53:14.646177 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:53:14.646186 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:53:14.646194 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:53:14.646203 | orchestrator | 2026-03-17 00:53:14.646212 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-17 00:53:14.646221 | orchestrator | Tuesday 17 March 2026 00:51:35 +0000 (0:00:00.892) 0:00:29.883 ********* 2026-03-17 00:53:14.646229 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:53:14.646237 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:53:14.646245 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:53:14.646254 | orchestrator | 2026-03-17 00:53:14.646262 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-17 00:53:14.646275 | orchestrator | Tuesday 17 March 2026 00:51:41 +0000 (0:00:06.639) 0:00:36.523 ********* 2026-03-17 00:53:14.646284 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:53:14.646292 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:53:14.646300 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:53:14.646308 | orchestrator | 2026-03-17 00:53:14.646316 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-17 00:53:14.646325 | orchestrator | 2026-03-17 00:53:14.646333 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-17 00:53:14.646341 | orchestrator | Tuesday 17 March 2026 00:51:42 +0000 (0:00:00.280) 0:00:36.803 ********* 2026-03-17 00:53:14.646349 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:53:14.646358 | orchestrator | 2026-03-17 00:53:14.646366 | orchestrator | TASK [rabbitmq : 
Put RabbitMQ node into maintenance mode] ********************** 2026-03-17 00:53:14.646375 | orchestrator | Tuesday 17 March 2026 00:51:42 +0000 (0:00:00.631) 0:00:37.434 ********* 2026-03-17 00:53:14.646383 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:53:14.646392 | orchestrator | 2026-03-17 00:53:14.646400 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-17 00:53:14.646409 | orchestrator | Tuesday 17 March 2026 00:51:42 +0000 (0:00:00.234) 0:00:37.668 ********* 2026-03-17 00:53:14.646417 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:53:14.646425 | orchestrator | 2026-03-17 00:53:14.646434 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-17 00:53:14.646442 | orchestrator | Tuesday 17 March 2026 00:51:44 +0000 (0:00:02.036) 0:00:39.704 ********* 2026-03-17 00:53:14.646451 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:53:14.646459 | orchestrator | 2026-03-17 00:53:14.646468 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-17 00:53:14.646477 | orchestrator | 2026-03-17 00:53:14.646485 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-17 00:53:14.646494 | orchestrator | Tuesday 17 March 2026 00:52:38 +0000 (0:00:53.952) 0:01:33.657 ********* 2026-03-17 00:53:14.646502 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:53:14.646511 | orchestrator | 2026-03-17 00:53:14.646519 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-17 00:53:14.646527 | orchestrator | Tuesday 17 March 2026 00:52:39 +0000 (0:00:00.555) 0:01:34.212 ********* 2026-03-17 00:53:14.646536 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:53:14.646545 | orchestrator | 2026-03-17 00:53:14.646553 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] 
*********************************** 2026-03-17 00:53:14.646562 | orchestrator | Tuesday 17 March 2026 00:52:39 +0000 (0:00:00.188) 0:01:34.401 ********* 2026-03-17 00:53:14.646570 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:53:14.646579 | orchestrator | 2026-03-17 00:53:14.646587 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-17 00:53:14.646596 | orchestrator | Tuesday 17 March 2026 00:52:41 +0000 (0:00:01.750) 0:01:36.152 ********* 2026-03-17 00:53:14.646611 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:53:14.646620 | orchestrator | 2026-03-17 00:53:14.646635 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-17 00:53:14.646643 | orchestrator | 2026-03-17 00:53:14.646652 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-17 00:53:14.646660 | orchestrator | Tuesday 17 March 2026 00:52:54 +0000 (0:00:12.991) 0:01:49.143 ********* 2026-03-17 00:53:14.646668 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:53:14.646676 | orchestrator | 2026-03-17 00:53:14.646684 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-17 00:53:14.646692 | orchestrator | Tuesday 17 March 2026 00:52:55 +0000 (0:00:00.624) 0:01:49.767 ********* 2026-03-17 00:53:14.646700 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:53:14.646708 | orchestrator | 2026-03-17 00:53:14.646716 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-17 00:53:14.646724 | orchestrator | Tuesday 17 March 2026 00:52:55 +0000 (0:00:00.293) 0:01:50.060 ********* 2026-03-17 00:53:14.646732 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:53:14.646741 | orchestrator | 2026-03-17 00:53:14.646749 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-17 
00:53:14.646757 | orchestrator | Tuesday 17 March 2026 00:52:56 +0000 (0:00:01.641) 0:01:51.702 ********* 2026-03-17 00:53:14.646765 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:53:14.646773 | orchestrator | 2026-03-17 00:53:14.646780 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-17 00:53:14.646788 | orchestrator | 2026-03-17 00:53:14.646797 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-17 00:53:14.646804 | orchestrator | Tuesday 17 March 2026 00:53:09 +0000 (0:00:12.999) 0:02:04.702 ********* 2026-03-17 00:53:14.646812 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:53:14.646821 | orchestrator | 2026-03-17 00:53:14.646829 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-17 00:53:14.646837 | orchestrator | Tuesday 17 March 2026 00:53:10 +0000 (0:00:00.593) 0:02:05.295 ********* 2026-03-17 00:53:14.646845 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-17 00:53:14.646853 | orchestrator | enable_outward_rabbitmq_True 2026-03-17 00:53:14.646861 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-17 00:53:14.646869 | orchestrator | outward_rabbitmq_restart 2026-03-17 00:53:14.646877 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:53:14.646885 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:53:14.646892 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:53:14.646900 | orchestrator | 2026-03-17 00:53:14.646908 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-17 00:53:14.646916 | orchestrator | skipping: no hosts matched 2026-03-17 00:53:14.646923 | orchestrator | 2026-03-17 00:53:14.646931 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-17 
00:53:14.646939 | orchestrator | skipping: no hosts matched 2026-03-17 00:53:14.646947 | orchestrator | 2026-03-17 00:53:14.646955 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-17 00:53:14.646963 | orchestrator | skipping: no hosts matched 2026-03-17 00:53:14.646971 | orchestrator | 2026-03-17 00:53:14.646979 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:53:14.646991 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-17 00:53:14.647000 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-17 00:53:14.647009 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:53:14.647022 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:53:14.647031 | orchestrator | 2026-03-17 00:53:14.647039 | orchestrator | 2026-03-17 00:53:14.647047 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:53:14.647070 | orchestrator | Tuesday 17 March 2026 00:53:12 +0000 (0:00:01.892) 0:02:07.188 ********* 2026-03-17 00:53:14.647078 | orchestrator | =============================================================================== 2026-03-17 00:53:14.647086 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.94s 2026-03-17 00:53:14.647094 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.64s 2026-03-17 00:53:14.647102 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.43s 2026-03-17 00:53:14.647111 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.76s 2026-03-17 00:53:14.647120 | orchestrator | 
rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.28s 2026-03-17 00:53:14.647128 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.24s 2026-03-17 00:53:14.647136 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 2.02s 2026-03-17 00:53:14.647144 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 1.89s 2026-03-17 00:53:14.647152 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.81s 2026-03-17 00:53:14.647160 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.74s 2026-03-17 00:53:14.647168 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.71s 2026-03-17 00:53:14.647176 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.70s 2026-03-17 00:53:14.647183 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.45s 2026-03-17 00:53:14.647191 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.41s 2026-03-17 00:53:14.647206 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.35s 2026-03-17 00:53:14.647214 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.21s 2026-03-17 00:53:14.647222 | orchestrator | rabbitmq : Check if running RabbitMQ is at most one version behind ------ 1.11s 2026-03-17 00:53:14.647230 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.06s 2026-03-17 00:53:14.647238 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.96s 2026-03-17 00:53:14.647246 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.95s 2026-03-17 00:53:14.647254 | orchestrator | 2026-03-17 
00:53:14 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:53:14.647263 | orchestrator | 2026-03-17 00:53:14 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:53:14.647270 | orchestrator | 2026-03-17 00:53:14 | INFO  | Task 2a1bb685-06ba-40fd-bc2c-7c4b9e796252 is in state STARTED 2026-03-17 00:53:14.647278 | orchestrator | 2026-03-17 00:53:14 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:54:12.428439 | orchestrator | 2026-03-17 00:54:12 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:54:12.431206 | orchestrator | 2026-03-17 00:54:12 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED 2026-03-17 00:54:12.434067 | orchestrator | 2026-03-17 00:54:12 | INFO  | Task 2a1bb685-06ba-40fd-bc2c-7c4b9e796252 is in state SUCCESS 2026-03-17 00:54:12.436071 | orchestrator | 2026-03-17 00:54:12.436171 | orchestrator | 2026-03-17 00:54:12.436180 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:54:12.436186 | orchestrator | 2026-03-17 00:54:12.436191 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:54:12.436197 | orchestrator | Tuesday 17 March 2026 00:51:51 +0000 (0:00:00.167) 0:00:00.167 ********* 2026-03-17 00:54:12.436202 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:54:12.436208 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:54:12.436213 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:54:12.436218 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:12.436222 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:12.436227 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:12.436232 | orchestrator | 2026-03-17 00:54:12.436237 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:54:12.436242 | orchestrator | Tuesday 17 March 2026 00:51:51 +0000 (0:00:00.763) 0:00:00.930 ********* 2026-03-17 00:54:12.436247 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-17 00:54:12.436252 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-17 00:54:12.436256 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-17 00:54:12.436270 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-17 00:54:12.436276 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-17 00:54:12.436304 | orchestrator | ok: [testbed-node-2] =>
(item=enable_ovn_True) 2026-03-17 00:54:12.436310 | orchestrator | 2026-03-17 00:54:12.436347 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-17 00:54:12.436352 | orchestrator | 2026-03-17 00:54:12.436357 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-17 00:54:12.436362 | orchestrator | Tuesday 17 March 2026 00:51:52 +0000 (0:00:00.991) 0:00:01.922 ********* 2026-03-17 00:54:12.436368 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:54:12.436374 | orchestrator | 2026-03-17 00:54:12.436379 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-17 00:54:12.436398 | orchestrator | Tuesday 17 March 2026 00:51:53 +0000 (0:00:00.955) 0:00:02.877 ********* 2026-03-17 00:54:12.436405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436412 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436418 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436449 | orchestrator | 2026-03-17 00:54:12.436453 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] 
************ 2026-03-17 00:54:12.436459 | orchestrator | Tuesday 17 March 2026 00:51:54 +0000 (0:00:00.964) 0:00:03.842 ********* 2026-03-17 00:54:12.436464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436548 | orchestrator | 2026-03-17 00:54:12.436553 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-17 00:54:12.436559 | orchestrator | Tuesday 17 March 2026 00:51:56 +0000 (0:00:01.326) 0:00:05.168 ********* 2026-03-17 00:54:12.436564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436571 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436581 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436601 | orchestrator | 2026-03-17 00:54:12.436604 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-17 00:54:12.436607 | orchestrator | Tuesday 17 March 2026 00:51:57 +0000 (0:00:01.100) 0:00:06.268 ********* 2026-03-17 00:54:12.436610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436620 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436633 | orchestrator | 2026-03-17 00:54:12.436637 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-17 00:54:12.436645 | orchestrator | Tuesday 17 March 2026 00:51:58 +0000 (0:00:01.697) 0:00:07.966 ********* 2026-03-17 00:54:12.436653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436797 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.436807 | orchestrator | 2026-03-17 00:54:12.436812 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-17 00:54:12.436817 | orchestrator | Tuesday 17 March 2026 00:52:00 +0000 (0:00:01.376) 0:00:09.342 ********* 2026-03-17 00:54:12.436822 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:54:12.436827 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:54:12.436832 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:54:12.436837 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:54:12.436842 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:54:12.436846 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:54:12.436851 | orchestrator | 2026-03-17 00:54:12.436856 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-17 00:54:12.436861 | orchestrator | Tuesday 17 March 2026 00:52:02 +0000 (0:00:02.354) 0:00:11.697 ********* 2026-03-17 00:54:12.436866 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-17 00:54:12.436871 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-17 00:54:12.436880 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-17 00:54:12.436888 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-17 00:54:12.436892 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-17 00:54:12.436897 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-17 00:54:12.436902 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:54:12.436907 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:54:12.436912 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:54:12.436916 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:54:12.436921 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:54:12.436929 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:54:12.436934 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-17 00:54:12.436940 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-17 00:54:12.436945 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-17 00:54:12.436950 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-17 00:54:12.436955 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 
2026-03-17 00:54:12.436960 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-17 00:54:12.436965 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:54:12.436970 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:54:12.436975 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:54:12.436980 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:54:12.436985 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:54:12.436990 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:54:12.436994 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:54:12.436999 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:54:12.437004 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:54:12.437008 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:54:12.437013 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:54:12.437018 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:54:12.437023 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:54:12.437031 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:54:12.437036 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:54:12.437041 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:54:12.437045 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:54:12.437051 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:54:12.437055 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-17 00:54:12.437061 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-17 00:54:12.437066 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-17 00:54:12.437071 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-17 00:54:12.437079 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-17 00:54:12.437145 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-17 00:54:12.437149 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-17 00:54:12.437153 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-17 00:54:12.437156 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 
'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-17 00:54:12.437159 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-17 00:54:12.437165 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-17 00:54:12.437168 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-17 00:54:12.437171 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-17 00:54:12.437175 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-17 00:54:12.437178 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-17 00:54:12.437181 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-17 00:54:12.437184 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-17 00:54:12.437187 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-17 00:54:12.437190 | orchestrator | 2026-03-17 00:54:12.437193 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:54:12.437196 | orchestrator | Tuesday 17 March 2026 00:52:24 +0000 (0:00:22.006) 0:00:33.703 ********* 2026-03-17 00:54:12.437199 | orchestrator | 2026-03-17 00:54:12.437202 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 
00:54:12.437205 | orchestrator | Tuesday 17 March 2026 00:52:24 +0000 (0:00:00.061) 0:00:33.765 ********* 2026-03-17 00:54:12.437208 | orchestrator | 2026-03-17 00:54:12.437212 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:54:12.437218 | orchestrator | Tuesday 17 March 2026 00:52:24 +0000 (0:00:00.093) 0:00:33.858 ********* 2026-03-17 00:54:12.437221 | orchestrator | 2026-03-17 00:54:12.437224 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:54:12.437227 | orchestrator | Tuesday 17 March 2026 00:52:24 +0000 (0:00:00.063) 0:00:33.921 ********* 2026-03-17 00:54:12.437230 | orchestrator | 2026-03-17 00:54:12.437233 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:54:12.437236 | orchestrator | Tuesday 17 March 2026 00:52:24 +0000 (0:00:00.058) 0:00:33.980 ********* 2026-03-17 00:54:12.437239 | orchestrator | 2026-03-17 00:54:12.437242 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:54:12.437245 | orchestrator | Tuesday 17 March 2026 00:52:24 +0000 (0:00:00.062) 0:00:34.043 ********* 2026-03-17 00:54:12.437248 | orchestrator | 2026-03-17 00:54:12.437251 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-17 00:54:12.437254 | orchestrator | Tuesday 17 March 2026 00:52:25 +0000 (0:00:00.068) 0:00:34.111 ********* 2026-03-17 00:54:12.437257 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:12.437261 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:12.437264 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:54:12.437267 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:54:12.437270 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:54:12.437273 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:12.437276 | orchestrator | 2026-03-17 00:54:12.437279 | 
orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-17 00:54:12.437282 | orchestrator | Tuesday 17 March 2026 00:52:26 +0000 (0:00:01.970) 0:00:36.081 ********* 2026-03-17 00:54:12.437285 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:54:12.437288 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:54:12.437291 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:54:12.437294 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:54:12.437297 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:54:12.437300 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:54:12.437303 | orchestrator | 2026-03-17 00:54:12.437307 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-17 00:54:12.437310 | orchestrator | 2026-03-17 00:54:12.437313 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-17 00:54:12.437316 | orchestrator | Tuesday 17 March 2026 00:52:51 +0000 (0:00:24.270) 0:01:00.352 ********* 2026-03-17 00:54:12.437319 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:54:12.437322 | orchestrator | 2026-03-17 00:54:12.437325 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-17 00:54:12.437328 | orchestrator | Tuesday 17 March 2026 00:52:52 +0000 (0:00:00.784) 0:01:01.136 ********* 2026-03-17 00:54:12.437331 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:54:12.437334 | orchestrator | 2026-03-17 00:54:12.437340 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-17 00:54:12.437344 | orchestrator | Tuesday 17 March 2026 00:52:52 +0000 (0:00:00.488) 0:01:01.625 ********* 2026-03-17 00:54:12.437347 | orchestrator | 
ok: [testbed-node-1] 2026-03-17 00:54:12.437350 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:12.437353 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:12.437356 | orchestrator | 2026-03-17 00:54:12.437359 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-17 00:54:12.437362 | orchestrator | Tuesday 17 March 2026 00:52:53 +0000 (0:00:01.175) 0:01:02.801 ********* 2026-03-17 00:54:12.437365 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:12.437368 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:12.437371 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:12.437374 | orchestrator | 2026-03-17 00:54:12.437377 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-17 00:54:12.437380 | orchestrator | Tuesday 17 March 2026 00:52:54 +0000 (0:00:00.490) 0:01:03.291 ********* 2026-03-17 00:54:12.437391 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:12.437396 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:12.437401 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:12.437406 | orchestrator | 2026-03-17 00:54:12.437414 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-17 00:54:12.437419 | orchestrator | Tuesday 17 March 2026 00:52:54 +0000 (0:00:00.306) 0:01:03.597 ********* 2026-03-17 00:54:12.437424 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:12.437429 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:12.437434 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:12.437439 | orchestrator | 2026-03-17 00:54:12.437443 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-17 00:54:12.437448 | orchestrator | Tuesday 17 March 2026 00:52:54 +0000 (0:00:00.289) 0:01:03.887 ********* 2026-03-17 00:54:12.437453 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:12.437458 | orchestrator | ok: 
[testbed-node-1] 2026-03-17 00:54:12.437462 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:12.437467 | orchestrator | 2026-03-17 00:54:12.437472 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-17 00:54:12.437477 | orchestrator | Tuesday 17 March 2026 00:52:55 +0000 (0:00:00.550) 0:01:04.437 ********* 2026-03-17 00:54:12.437482 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437487 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.437492 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437497 | orchestrator | 2026-03-17 00:54:12.437502 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-17 00:54:12.437507 | orchestrator | Tuesday 17 March 2026 00:52:55 +0000 (0:00:00.340) 0:01:04.778 ********* 2026-03-17 00:54:12.437512 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437517 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.437521 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437526 | orchestrator | 2026-03-17 00:54:12.437531 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-17 00:54:12.437536 | orchestrator | Tuesday 17 March 2026 00:52:55 +0000 (0:00:00.271) 0:01:05.050 ********* 2026-03-17 00:54:12.437540 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437545 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.437550 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437555 | orchestrator | 2026-03-17 00:54:12.437560 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-17 00:54:12.437564 | orchestrator | Tuesday 17 March 2026 00:52:56 +0000 (0:00:00.277) 0:01:05.327 ********* 2026-03-17 00:54:12.437569 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437574 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 00:54:12.437579 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437583 | orchestrator | 2026-03-17 00:54:12.437588 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-17 00:54:12.437593 | orchestrator | Tuesday 17 March 2026 00:52:56 +0000 (0:00:00.408) 0:01:05.736 ********* 2026-03-17 00:54:12.437598 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437602 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.437607 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437612 | orchestrator | 2026-03-17 00:54:12.437617 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-17 00:54:12.437622 | orchestrator | Tuesday 17 March 2026 00:52:57 +0000 (0:00:00.534) 0:01:06.270 ********* 2026-03-17 00:54:12.437626 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437631 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.437636 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437641 | orchestrator | 2026-03-17 00:54:12.437646 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-17 00:54:12.437650 | orchestrator | Tuesday 17 March 2026 00:52:57 +0000 (0:00:00.380) 0:01:06.651 ********* 2026-03-17 00:54:12.437655 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437663 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.437668 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437673 | orchestrator | 2026-03-17 00:54:12.437678 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-17 00:54:12.437682 | orchestrator | Tuesday 17 March 2026 00:52:57 +0000 (0:00:00.380) 0:01:07.032 ********* 2026-03-17 00:54:12.437687 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437692 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 00:54:12.437696 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437701 | orchestrator | 2026-03-17 00:54:12.437706 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-17 00:54:12.437711 | orchestrator | Tuesday 17 March 2026 00:52:58 +0000 (0:00:00.742) 0:01:07.774 ********* 2026-03-17 00:54:12.437715 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437720 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.437725 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437730 | orchestrator | 2026-03-17 00:54:12.437734 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-17 00:54:12.437739 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.532) 0:01:08.307 ********* 2026-03-17 00:54:12.437744 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437749 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.437753 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437758 | orchestrator | 2026-03-17 00:54:12.437766 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-17 00:54:12.437771 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.350) 0:01:08.657 ********* 2026-03-17 00:54:12.437776 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437780 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.437785 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437790 | orchestrator | 2026-03-17 00:54:12.437794 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-17 00:54:12.437799 | orchestrator | Tuesday 17 March 2026 00:53:00 +0000 (0:00:00.600) 0:01:09.258 ********* 2026-03-17 00:54:12.437804 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437809 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 00:54:12.437814 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437819 | orchestrator | 2026-03-17 00:54:12.437824 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-17 00:54:12.437829 | orchestrator | Tuesday 17 March 2026 00:53:00 +0000 (0:00:00.638) 0:01:09.897 ********* 2026-03-17 00:54:12.437838 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:54:12.437844 | orchestrator | 2026-03-17 00:54:12.437849 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-17 00:54:12.437854 | orchestrator | Tuesday 17 March 2026 00:53:01 +0000 (0:00:01.188) 0:01:11.085 ********* 2026-03-17 00:54:12.437859 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:12.437864 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:12.437870 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:12.437875 | orchestrator | 2026-03-17 00:54:12.437880 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-17 00:54:12.437885 | orchestrator | Tuesday 17 March 2026 00:53:02 +0000 (0:00:00.627) 0:01:11.713 ********* 2026-03-17 00:54:12.437889 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:12.437893 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:12.437898 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:12.437902 | orchestrator | 2026-03-17 00:54:12.437907 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-17 00:54:12.437912 | orchestrator | Tuesday 17 March 2026 00:53:03 +0000 (0:00:00.589) 0:01:12.303 ********* 2026-03-17 00:54:12.437917 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437923 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.437932 | orchestrator | skipping: [testbed-node-2] 
2026-03-17 00:54:12.437937 | orchestrator | 2026-03-17 00:54:12.437942 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-17 00:54:12.437946 | orchestrator | Tuesday 17 March 2026 00:53:03 +0000 (0:00:00.500) 0:01:12.803 ********* 2026-03-17 00:54:12.437951 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437956 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.437961 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437965 | orchestrator | 2026-03-17 00:54:12.437971 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-17 00:54:12.437976 | orchestrator | Tuesday 17 March 2026 00:53:04 +0000 (0:00:00.563) 0:01:13.367 ********* 2026-03-17 00:54:12.437981 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.437987 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.437990 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.437994 | orchestrator | 2026-03-17 00:54:12.437999 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-17 00:54:12.438004 | orchestrator | Tuesday 17 March 2026 00:53:04 +0000 (0:00:00.359) 0:01:13.727 ********* 2026-03-17 00:54:12.438009 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.438040 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.438043 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.438046 | orchestrator | 2026-03-17 00:54:12.438049 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-17 00:54:12.438052 | orchestrator | Tuesday 17 March 2026 00:53:04 +0000 (0:00:00.273) 0:01:14.000 ********* 2026-03-17 00:54:12.438055 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.438058 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.438062 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 00:54:12.438065 | orchestrator | 2026-03-17 00:54:12.438068 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-17 00:54:12.438071 | orchestrator | Tuesday 17 March 2026 00:53:05 +0000 (0:00:00.433) 0:01:14.434 ********* 2026-03-17 00:54:12.438074 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:12.438077 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:12.438080 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:12.438084 | orchestrator | 2026-03-17 00:54:12.438102 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-17 00:54:12.438106 | orchestrator | Tuesday 17 March 2026 00:53:05 +0000 (0:00:00.549) 0:01:14.983 ********* 2026-03-17 00:54:12.438113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 2026-03-17 00:54:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:54:12.438136 | orchestrator | 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-17 00:54:12.438178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438185 | orchestrator | 2026-03-17 00:54:12.438190 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-17 00:54:12.438195 | orchestrator | Tuesday 17 March 2026 00:53:07 +0000 (0:00:01.662) 0:01:16.646 ********* 2026-03-17 00:54:12.438198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438235 | orchestrator | 2026-03-17 00:54:12.438240 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-17 00:54:12.438246 | orchestrator | Tuesday 17 March 2026 00:53:11 +0000 (0:00:03.565) 0:01:20.212 ********* 2026-03-17 00:54:12.438250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438253 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438289 | orchestrator | 2026-03-17 00:54:12.438292 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-17 00:54:12.438295 | orchestrator | Tuesday 17 March 2026 00:53:13 +0000 (0:00:02.049) 0:01:22.261 ********* 2026-03-17 00:54:12.438298 | orchestrator | 2026-03-17 00:54:12.438302 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-17 00:54:12.438305 | orchestrator | 
Tuesday 17 March 2026 00:53:13 +0000 (0:00:00.060) 0:01:22.322 *********
2026-03-17 00:54:12.438309 | orchestrator |
2026-03-17 00:54:12.438315 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-17 00:54:12.438319 | orchestrator | Tuesday 17 March 2026 00:53:13 +0000 (0:00:00.064) 0:01:22.386 *********
2026-03-17 00:54:12.438322 | orchestrator |
2026-03-17 00:54:12.438325 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-17 00:54:12.438328 | orchestrator | Tuesday 17 March 2026 00:53:13 +0000 (0:00:00.065) 0:01:22.452 *********
2026-03-17 00:54:12.438331 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:12.438334 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:12.438337 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:12.438340 | orchestrator |
2026-03-17 00:54:12.438343 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-17 00:54:12.438348 | orchestrator | Tuesday 17 March 2026 00:53:19 +0000 (0:00:06.540) 0:01:28.992 *********
2026-03-17 00:54:12.438351 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:12.438356 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:12.438361 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:12.438366 | orchestrator |
2026-03-17 00:54:12.438371 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-17 00:54:12.438376 | orchestrator | Tuesday 17 March 2026 00:53:26 +0000 (0:00:06.685) 0:01:35.678 *********
2026-03-17 00:54:12.438382 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:12.438387 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:12.438392 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:12.438396 | orchestrator |
2026-03-17 00:54:12.438401 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-17 00:54:12.438407 | orchestrator | Tuesday 17 March 2026 00:53:35 +0000 (0:00:08.445) 0:01:44.124 *********
2026-03-17 00:54:12.438412 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:12.438417 | orchestrator |
2026-03-17 00:54:12.438423 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-17 00:54:12.438428 | orchestrator | Tuesday 17 March 2026 00:53:35 +0000 (0:00:00.101) 0:01:44.225 *********
2026-03-17 00:54:12.438433 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:12.438439 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:12.438448 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:12.438454 | orchestrator |
2026-03-17 00:54:12.438458 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-17 00:54:12.438461 | orchestrator | Tuesday 17 March 2026 00:53:36 +0000 (0:00:01.055) 0:01:45.281 *********
2026-03-17 00:54:12.438464 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:12.438467 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:12.438470 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:12.438473 | orchestrator |
2026-03-17 00:54:12.438476 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-17 00:54:12.438479 | orchestrator | Tuesday 17 March 2026 00:53:36 +0000 (0:00:00.735) 0:01:46.017 *********
2026-03-17 00:54:12.438482 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:12.438486 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:12.438489 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:12.438492 | orchestrator |
2026-03-17 00:54:12.438495 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-17 00:54:12.438498 | orchestrator | Tuesday 17 March 2026 00:53:37 +0000 (0:00:00.753) 0:01:46.770 *********
2026-03-17 00:54:12.438501 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:12.438506 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:12.438515 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:12.438519 | orchestrator |
2026-03-17 00:54:12.438523 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-17 00:54:12.438526 | orchestrator | Tuesday 17 March 2026 00:53:38 +0000 (0:00:00.739) 0:01:47.510 *********
2026-03-17 00:54:12.438529 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:12.438534 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:12.438540 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:12.438543 | orchestrator |
2026-03-17 00:54:12.438546 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-17 00:54:12.438549 | orchestrator | Tuesday 17 March 2026 00:53:39 +0000 (0:00:00.766) 0:01:48.277 *********
2026-03-17 00:54:12.438552 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:12.438556 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:12.438559 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:12.438562 | orchestrator |
2026-03-17 00:54:12.438567 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-17 00:54:12.438574 | orchestrator | Tuesday 17 March 2026 00:53:39 +0000 (0:00:00.765) 0:01:49.043 *********
2026-03-17 00:54:12.438577 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:12.438580 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:12.438583 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:12.438589 | orchestrator |
2026-03-17 00:54:12.438592 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-17 00:54:12.438595 | orchestrator | Tuesday 17 March 2026 00:53:40 +0000 (0:00:00.309) 0:01:49.353 *********
2026-03-17 00:54:12.438599 | orchestrator | ok: [testbed-node-0] =>
(item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438602 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438605 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438609 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438612 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438618 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438621 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438627 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438633 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438643 | orchestrator | 2026-03-17 00:54:12.438647 | orchestrator | TASK 
[ovn-db : Copying over config.json files for services] ******************** 2026-03-17 00:54:12.438650 | orchestrator | Tuesday 17 March 2026 00:53:41 +0000 (0:00:01.464) 0:01:50.817 ********* 2026-03-17 00:54:12.438653 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438658 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438664 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438668 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438680 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438695 | orchestrator | 2026-03-17 00:54:12.438698 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-17 00:54:12.438702 | orchestrator | Tuesday 17 March 2026 00:53:45 +0000 (0:00:03.565) 0:01:54.383 ********* 2026-03-17 00:54:12.438705 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438708 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438711 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438715 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:12.438735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})
2026-03-17 00:54:12.438739 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:12.438744 | orchestrator |
2026-03-17 00:54:12.438749 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-17 00:54:12.438752 | orchestrator | Tuesday 17 March 2026 00:53:47 +0000 (0:00:02.516) 0:01:56.900 *********
2026-03-17 00:54:12.438756 | orchestrator |
2026-03-17 00:54:12.438759 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-17 00:54:12.438762 | orchestrator | Tuesday 17 March 2026 00:53:47 +0000 (0:00:00.068) 0:01:56.969 *********
2026-03-17 00:54:12.438765 | orchestrator |
2026-03-17 00:54:12.438768 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-17 00:54:12.438771 | orchestrator | Tuesday 17 March 2026 00:53:47 +0000 (0:00:00.072) 0:01:57.041 *********
2026-03-17 00:54:12.438775 | orchestrator |
2026-03-17 00:54:12.438780 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-17 00:54:12.438786 | orchestrator | Tuesday 17 March 2026 00:53:48 +0000 (0:00:00.073) 0:01:57.115 *********
2026-03-17 00:54:12.438790 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:12.438793 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:12.438796 | orchestrator |
2026-03-17 00:54:12.438799 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-17 00:54:12.438802 | orchestrator | Tuesday 17 March 2026 00:53:54 +0000 (0:00:06.144) 0:02:03.260 *********
2026-03-17 00:54:12.438805 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:12.438809 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:12.438812 | orchestrator |
2026-03-17 00:54:12.438815 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-17 00:54:12.438818 | orchestrator | Tuesday 17 March 2026 00:54:00 +0000 (0:00:06.329) 0:02:09.590 *********
2026-03-17 00:54:12.438821 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:12.438824 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:12.438827 | orchestrator |
2026-03-17 00:54:12.438830 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-17 00:54:12.438834 | orchestrator | Tuesday 17 March 2026 00:54:06 +0000 (0:00:06.365) 0:02:15.956 *********
2026-03-17 00:54:12.438839 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:12.438845 | orchestrator |
2026-03-17 00:54:12.438849 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-17 00:54:12.438852 | orchestrator | Tuesday 17 March 2026 00:54:06 +0000 (0:00:00.121) 0:02:16.078 *********
2026-03-17 00:54:12.438855 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:12.438858 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:12.438861 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:12.438864 | orchestrator |
2026-03-17 00:54:12.438867 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-17 00:54:12.438870 | orchestrator | Tuesday 17 March 2026 00:54:07 +0000 (0:00:00.815) 0:02:16.893 *********
2026-03-17 00:54:12.438873 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:12.438876 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:12.438880 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:12.438883 | orchestrator |
2026-03-17 00:54:12.438888 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-17 00:54:12.438894 | orchestrator | Tuesday 17 March 2026 00:54:08 +0000 (0:00:00.684) 0:02:17.578 *********
2026-03-17 00:54:12.438897 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:12.438900 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:12.438903 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:12.438906 | orchestrator |
2026-03-17 00:54:12.438909 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-17 00:54:12.438912 | orchestrator | Tuesday 17 March 2026 00:54:09 +0000 (0:00:00.860) 0:02:18.439 *********
2026-03-17 00:54:12.438916 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:12.438919 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:12.438923 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:12.438926 | orchestrator |
2026-03-17 00:54:12.438929 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-17 00:54:12.438933 | orchestrator | Tuesday 17 March 2026 00:54:09 +0000 (0:00:00.652) 0:02:19.091 *********
2026-03-17 00:54:12.438936 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:12.438939 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:12.438942 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:12.438945 | orchestrator |
2026-03-17 00:54:12.438948 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-17 00:54:12.438952 | orchestrator | Tuesday 17 March 2026 00:54:10 +0000 (0:00:00.974) 0:02:20.065 *********
2026-03-17 00:54:12.438958 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:12.438963 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:12.438967 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:12.438970 | orchestrator |
2026-03-17 00:54:12.438973 | orchestrator | PLAY RECAP
********************************************************************* 2026-03-17 00:54:12.438976 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-17 00:54:12.438982 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-17 00:54:12.438985 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-17 00:54:12.438988 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:54:12.438991 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:54:12.438994 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:54:12.438997 | orchestrator | 2026-03-17 00:54:12.439001 | orchestrator | 2026-03-17 00:54:12.439004 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:54:12.439009 | orchestrator | Tuesday 17 March 2026 00:54:11 +0000 (0:00:00.872) 0:02:20.937 ********* 2026-03-17 00:54:12.439012 | orchestrator | =============================================================================== 2026-03-17 00:54:12.439015 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 24.27s 2026-03-17 00:54:12.439018 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.01s 2026-03-17 00:54:12.439023 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.81s 2026-03-17 00:54:12.439028 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.02s 2026-03-17 00:54:12.439032 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 12.69s 2026-03-17 00:54:12.439037 | orchestrator | ovn-db : 
Copying over config.json files for services -------------------- 3.57s 2026-03-17 00:54:12.439043 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.57s 2026-03-17 00:54:12.439046 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.52s 2026-03-17 00:54:12.439049 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.35s 2026-03-17 00:54:12.439052 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.05s 2026-03-17 00:54:12.439055 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.97s 2026-03-17 00:54:12.439058 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.70s 2026-03-17 00:54:12.439061 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.66s 2026-03-17 00:54:12.439064 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.46s 2026-03-17 00:54:12.439070 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.38s 2026-03-17 00:54:12.439073 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.33s 2026-03-17 00:54:12.439076 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.19s 2026-03-17 00:54:12.439079 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.18s 2026-03-17 00:54:12.439082 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.10s 2026-03-17 00:54:12.439108 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.06s 2026-03-17 00:54:15.473680 | orchestrator | 2026-03-17 00:54:15 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:54:15.475416 | orchestrator | 2026-03-17 
00:54:15 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state STARTED [identical "Task … is in state STARTED" / "Wait 1 second(s) until the next check" polling records, repeated every ~3 s from 00:54:15 to 00:56:35, omitted] 2026-03-17 00:56:38.393291 | orchestrator | 2026-03-17
00:56:38 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:56:38.401588 | orchestrator | 2026-03-17 00:56:38 | INFO  | Task 8aff811c-1ff4-42b8-be75-8b1396e894c0 is in state SUCCESS 2026-03-17 00:56:38.403378 | orchestrator | 2026-03-17 00:56:38.403460 | orchestrator | 2026-03-17 00:56:38.403476 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:56:38.403484 | orchestrator | 2026-03-17 00:56:38.403491 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:56:38.403498 | orchestrator | Tuesday 17 March 2026 00:50:43 +0000 (0:00:00.430) 0:00:00.430 ********* 2026-03-17 00:56:38.403504 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.403512 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.403519 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.403526 | orchestrator | 2026-03-17 00:56:38.403532 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:56:38.403539 | orchestrator | Tuesday 17 March 2026 00:50:44 +0000 (0:00:00.475) 0:00:00.906 ********* 2026-03-17 00:56:38.403546 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-17 00:56:38.403552 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-17 00:56:38.403558 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-17 00:56:38.403564 | orchestrator | 2026-03-17 00:56:38.403570 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-17 00:56:38.403576 | orchestrator | 2026-03-17 00:56:38.403582 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-17 00:56:38.403589 | orchestrator | Tuesday 17 March 2026 00:50:44 +0000 (0:00:00.429) 0:00:01.336 ********* 2026-03-17 00:56:38.403595 | orchestrator | 
included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.403602 | orchestrator | 2026-03-17 00:56:38.403608 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-17 00:56:38.403614 | orchestrator | Tuesday 17 March 2026 00:50:45 +0000 (0:00:00.705) 0:00:02.041 ********* 2026-03-17 00:56:38.403621 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.403627 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.403633 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.403639 | orchestrator | 2026-03-17 00:56:38.403646 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-17 00:56:38.403652 | orchestrator | Tuesday 17 March 2026 00:50:46 +0000 (0:00:00.863) 0:00:02.904 ********* 2026-03-17 00:56:38.403658 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.403664 | orchestrator | 2026-03-17 00:56:38.403671 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-17 00:56:38.403677 | orchestrator | Tuesday 17 March 2026 00:50:47 +0000 (0:00:01.074) 0:00:03.979 ********* 2026-03-17 00:56:38.403683 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.403689 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.403696 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.403702 | orchestrator | 2026-03-17 00:56:38.403708 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-17 00:56:38.403715 | orchestrator | Tuesday 17 March 2026 00:50:48 +0000 (0:00:00.882) 0:00:04.861 ********* 2026-03-17 00:56:38.403721 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-17 00:56:38.403727 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 
'value': 1}) 2026-03-17 00:56:38.403733 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-17 00:56:38.403740 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-17 00:56:38.403746 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-17 00:56:38.403973 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-17 00:56:38.403981 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-17 00:56:38.404002 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-17 00:56:38.404009 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-17 00:56:38.404032 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-17 00:56:38.404039 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-17 00:56:38.404045 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-17 00:56:38.404051 | orchestrator | 2026-03-17 00:56:38.404058 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-17 00:56:38.404063 | orchestrator | Tuesday 17 March 2026 00:50:50 +0000 (0:00:02.523) 0:00:07.384 ********* 2026-03-17 00:56:38.404070 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-17 00:56:38.404077 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-17 00:56:38.404084 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-17 00:56:38.404090 | orchestrator | 2026-03-17 00:56:38.404097 | orchestrator | TASK [module-load : Persist modules via 
modules-load.d] ************************ 2026-03-17 00:56:38.404103 | orchestrator | Tuesday 17 March 2026 00:50:51 +0000 (0:00:00.715) 0:00:08.099 ********* 2026-03-17 00:56:38.404109 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-17 00:56:38.404116 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-17 00:56:38.404123 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-17 00:56:38.404131 | orchestrator | 2026-03-17 00:56:38.404138 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-17 00:56:38.404144 | orchestrator | Tuesday 17 March 2026 00:50:52 +0000 (0:00:01.407) 0:00:09.507 ********* 2026-03-17 00:56:38.404150 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-17 00:56:38.404157 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.404176 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-17 00:56:38.404182 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.404189 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-17 00:56:38.404195 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.404201 | orchestrator | 2026-03-17 00:56:38.404207 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-17 00:56:38.404214 | orchestrator | Tuesday 17 March 2026 00:50:53 +0000 (0:00:00.656) 0:00:10.164 ********* 2026-03-17 00:56:38.404223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-17 00:56:38.404236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-17 00:56:38.404261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-17 00:56:38.404272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:56:38.404280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:56:38.404292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:56:38.404300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 00:56:38.404307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 00:56:38.404314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 00:56:38.404325 | orchestrator | 2026-03-17 00:56:38.404332 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-17 00:56:38.404338 | orchestrator | Tuesday 17 March 2026 00:50:55 +0000 (0:00:01.772) 0:00:11.936 ********* 2026-03-17 00:56:38.404345 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.404351 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.404357 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.404363 | orchestrator | 2026-03-17 00:56:38.404370 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-17 00:56:38.404377 | orchestrator | Tuesday 17 March 2026 00:50:56 +0000 
(0:00:00.844) 0:00:12.781 *********
2026-03-17 00:56:38.404383 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-17 00:56:38.404389 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-17 00:56:38.404510 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-17 00:56:38.404518 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-17 00:56:38.404524 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-17 00:56:38.404530 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-17 00:56:38.404537 | orchestrator |
2026-03-17 00:56:38.404543 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-17 00:56:38.404550 | orchestrator | Tuesday 17 March 2026 00:50:58 +0000 (0:00:02.331) 0:00:15.112 *********
2026-03-17 00:56:38.404556 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:56:38.404572 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:56:38.404578 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:56:38.404592 | orchestrator |
2026-03-17 00:56:38.404603 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-17 00:56:38.404609 | orchestrator | Tuesday 17 March 2026 00:51:00 +0000 (0:00:01.929) 0:00:17.042 *********
2026-03-17 00:56:38.404616 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:56:38.404623 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:56:38.404629 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:56:38.404635 | orchestrator |
2026-03-17 00:56:38.404642 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-17 00:56:38.404648 | orchestrator | Tuesday 17 March 2026 00:51:02 +0000 (0:00:02.050) 0:00:19.093 *********
2026-03-17 00:56:38.404655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.404670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.404683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.404690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.404697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.404708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.404715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.404722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__63c8a770ad9947416530ae6f9dfb6bfe592de171', '__omit_place_holder__63c8a770ad9947416530ae6f9dfb6bfe592de171'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-17 00:56:38.404988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.405007 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.405062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.405070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__63c8a770ad9947416530ae6f9dfb6bfe592de171', '__omit_place_holder__63c8a770ad9947416530ae6f9dfb6bfe592de171'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-17 00:56:38.405077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__63c8a770ad9947416530ae6f9dfb6bfe592de171', '__omit_place_holder__63c8a770ad9947416530ae6f9dfb6bfe592de171'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-17 00:56:38.405083 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.405090 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.405096 | orchestrator |
2026-03-17 00:56:38.405103 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-03-17 00:56:38.405109 | orchestrator | Tuesday 17 March 2026 00:51:03 +0000 (0:00:01.266) 0:00:20.360 *********
2026-03-17 00:56:38.405117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.405195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.405289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.405299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.405306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.405313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__63c8a770ad9947416530ae6f9dfb6bfe592de171', '__omit_place_holder__63c8a770ad9947416530ae6f9dfb6bfe592de171'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-17 00:56:38.405323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.405330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.405360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.405367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__63c8a770ad9947416530ae6f9dfb6bfe592de171', '__omit_place_holder__63c8a770ad9947416530ae6f9dfb6bfe592de171'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-17 00:56:38.405374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.405381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__63c8a770ad9947416530ae6f9dfb6bfe592de171', '__omit_place_holder__63c8a770ad9947416530ae6f9dfb6bfe592de171'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-17 00:56:38.405389 | orchestrator |
2026-03-17 00:56:38.405395 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-03-17 00:56:38.405401 | orchestrator | Tuesday 17 March 2026 00:51:06 +0000 (0:00:03.174) 0:00:23.535 *********
2026-03-17 00:56:38.405411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.405418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.405620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.405630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.405638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.405644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.405655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.405662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.405675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.405682 | orchestrator |
2026-03-17 00:56:38.405688 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-03-17 00:56:38.405716 | orchestrator | Tuesday 17 March 2026 00:51:09 +0000 (0:00:02.910) 0:00:26.445 *********
2026-03-17 00:56:38.405724 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-17 00:56:38.405748 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-17 00:56:38.405755 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-17 00:56:38.405815 | orchestrator |
2026-03-17 00:56:38.405821 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-03-17 00:56:38.405828 | orchestrator | Tuesday 17 March 2026 00:51:11 +0000 (0:00:02.148) 0:00:28.594 *********
2026-03-17 00:56:38.405835 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-17 00:56:38.405841 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-17 00:56:38.405847 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-17 00:56:38.405887 | orchestrator |
2026-03-17 00:56:38.405896 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-03-17 00:56:38.405902 | orchestrator | Tuesday 17 March 2026 00:51:17 +0000 (0:00:05.744) 0:00:34.338 *********
2026-03-17 00:56:38.406147 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.406163 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.406170 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.406176 | orchestrator |
2026-03-17 00:56:38.406183 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-03-17 00:56:38.406190 | orchestrator | Tuesday 17 March 2026 00:51:18 +0000 (0:00:00.533) 0:00:34.871 *********
2026-03-17 00:56:38.406197 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-17 00:56:38.406206 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-17 00:56:38.406212 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-17 00:56:38.406218 | orchestrator |
2026-03-17 00:56:38.406225 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-03-17 00:56:38.406231 | orchestrator | Tuesday 17 March 2026 00:51:20 +0000 (0:00:02.147) 0:00:37.020 *********
2026-03-17 00:56:38.406238 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-17 00:56:38.406244 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-17 00:56:38.406250 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-17 00:56:38.406257 | orchestrator |
2026-03-17 00:56:38.406263 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-03-17 00:56:38.406269 | orchestrator | Tuesday 17 March 2026 00:51:22 +0000 (0:00:02.247) 0:00:39.267 *********
2026-03-17 00:56:38.406275 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-03-17 00:56:38.406293 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-03-17 00:56:38.406300 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-03-17 00:56:38.406307 | orchestrator |
2026-03-17 00:56:38.406313 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-03-17 00:56:38.406320 | orchestrator | Tuesday 17 March 2026 00:51:24 +0000 (0:00:01.913) 0:00:41.181 *********
2026-03-17 00:56:38.406326 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-03-17 00:56:38.406333 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-03-17 00:56:38.406339 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-03-17 00:56:38.406455 | orchestrator |
2026-03-17 00:56:38.406471 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-17 00:56:38.406477 | orchestrator | Tuesday 17 March 2026 00:51:26 +0000 (0:00:02.168) 0:00:43.349 *********
2026-03-17 00:56:38.406484 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:56:38.406490 | orchestrator |
2026-03-17 00:56:38.406696 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-03-17 00:56:38.406705 | orchestrator | Tuesday 17 March 2026 00:51:27 +0000 (0:00:01.362) 0:00:44.711 *********
2026-03-17 00:56:38.406713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.406746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.406755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.406762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.406769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.406789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.406796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.406803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.406827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.406834 | orchestrator |
2026-03-17 00:56:38.406841 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-03-17 00:56:38.406847 | orchestrator | Tuesday 17 March 2026 00:51:31 +0000 (0:00:03.289) 0:00:48.001 *********
2026-03-17 00:56:38.406854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.406861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.406872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.406878 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.406888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.406895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-17 00:56:38.406985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.406995 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.407002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value':
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407041 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.407047 | orchestrator | 2026-03-17 00:56:38.407054 | orchestrator | TASK 
[service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-17 00:56:38.407060 | orchestrator | Tuesday 17 March 2026 00:51:32 +0000 (0:00:00.761) 0:00:48.763 ********* 2026-03-17 00:56:38.407070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407107 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.407113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407135 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.407142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407166 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.407171 | orchestrator | 2026-03-17 00:56:38.407178 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-17 00:56:38.407184 | orchestrator | Tuesday 17 March 2026 00:51:33 +0000 (0:00:01.520) 0:00:50.284 ********* 2026-03-17 00:56:38.407208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407311 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.407316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407332 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.407349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407367 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.407375 | orchestrator | 2026-03-17 00:56:38.407381 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-17 00:56:38.407388 | orchestrator | Tuesday 17 March 2026 00:51:34 +0000 (0:00:01.088) 0:00:51.373 ********* 2026-03-17 00:56:38.407394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407418 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.407424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407465 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.407471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407509 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.407516 | orchestrator | 2026-03-17 00:56:38.407523 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-17 00:56:38.407528 | orchestrator | Tuesday 17 March 2026 00:51:35 +0000 (0:00:00.995) 0:00:52.368 ********* 2026-03-17 00:56:38.407532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407572 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.407578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407643 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.407649 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.407656 | orchestrator | 2026-03-17 00:56:38.407662 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-17 00:56:38.407668 | orchestrator | Tuesday 17 March 2026 00:51:36 +0000 (0:00:00.701) 0:00:53.069 ********* 2026-03-17 00:56:38.407674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407700 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.407710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407756 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.407763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407781 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.407787 | orchestrator | 2026-03-17 00:56:38.407793 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-17 00:56:38.407799 | orchestrator | Tuesday 17 March 2026 00:51:37 +0000 (0:00:00.993) 0:00:54.063 ********* 2026-03-17 00:56:38.407809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407850 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.407857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.407876 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.407886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.407897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.407903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.408180 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.408190 | orchestrator | 2026-03-17 00:56:38.408196 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-17 00:56:38.408268 | orchestrator | Tuesday 17 March 2026 00:51:37 +0000 (0:00:00.626) 
0:00:54.689 ********* 2026-03-17 00:56:38.408276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.408281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.408287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.408294 | orchestrator | 
skipping: [testbed-node-0] 2026-03-17 00:56:38.408304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.408329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.408336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.408342 | orchestrator | 
skipping: [testbed-node-2] 2026-03-17 00:56:38.408368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 00:56:38.408376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 00:56:38.408382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 00:56:38.408388 | orchestrator | 
skipping: [testbed-node-1]
2026-03-17 00:56:38.408395 | orchestrator |
2026-03-17 00:56:38.408400 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-03-17 00:56:38.408407 | orchestrator | Tuesday 17 March 2026 00:51:39 +0000 (0:00:01.216) 0:00:55.906 *********
2026-03-17 00:56:38.408411 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-17 00:56:38.408416 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-17 00:56:38.408420 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-17 00:56:38.408428 | orchestrator |
2026-03-17 00:56:38.408432 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-03-17 00:56:38.408436 | orchestrator | Tuesday 17 March 2026 00:51:41 +0000 (0:00:01.861) 0:00:57.767 *********
2026-03-17 00:56:38.408440 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-17 00:56:38.408444 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-17 00:56:38.408448 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-17 00:56:38.408451 | orchestrator |
2026-03-17 00:56:38.408455 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-03-17 00:56:38.408459 | orchestrator | Tuesday 17 March 2026 00:51:42 +0000 (0:00:00.732) 0:00:59.049 *********
2026-03-17 00:56:38.408466 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 00:56:38.408470 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 00:56:38.408473 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 00:56:38.408477 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 00:56:38.408481 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.408485 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 00:56:38.408489 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.408492 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 00:56:38.408496 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.408500 | orchestrator |
2026-03-17 00:56:38.408503 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-03-17 00:56:38.408507 | orchestrator | Tuesday 17 March 2026 00:51:43 +0000 (0:00:00.732) 0:00:59.781 *********
2026-03-17 00:56:38.408523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-17 00:56:38.408528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-17 00:56:38.408532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-17 00:56:38.408539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:56:38.408544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:56:38.408548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 00:56:38.408552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 00:56:38.408571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.408609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-17 00:56:38.408619 | orchestrator |
2026-03-17 00:56:38.408625 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-03-17 00:56:38.408630 | orchestrator | Tuesday 17 March 2026 00:51:45 +0000 (0:00:02.861) 0:01:02.642 *********
2026-03-17 00:56:38.408642 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:56:38.408647 | orchestrator |
2026-03-17 00:56:38.408653 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-03-17 00:56:38.408659 | orchestrator | Tuesday 17 March 2026 00:51:46 +0000 (0:00:00.508) 0:01:03.151 *********
2026-03-17 00:56:38.408668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-17 00:56:38.408678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 00:56:38.408685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.408691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.408716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-17 00:56:38.408723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 00:56:38.408735 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-17 00:56:38.408742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.408746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 00:56:38.408750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.408765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.408770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 
00:56:38.408778 | orchestrator | 2026-03-17 00:56:38.408782 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-17 00:56:38.408786 | orchestrator | Tuesday 17 March 2026 00:51:49 +0000 (0:00:03.480) 0:01:06.632 ********* 2026-03-17 00:56:38.408790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-17 00:56:38.408794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 00:56:38.408801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.408805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.408809 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.408823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8042', 'listen_port': '8042'}}}})  2026-03-17 00:56:38.408832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 00:56:38.408836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.408840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.408844 | orchestrator | skipping: [testbed-node-1] 
2026-03-17 00:56:38.408851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-17 00:56:38.408855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 00:56:38.408869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409155 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.409160 | orchestrator | 2026-03-17 00:56:38.409165 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-17 00:56:38.409169 | orchestrator | Tuesday 17 March 2026 00:51:51 +0000 (0:00:01.413) 0:01:08.045 ********* 2026-03-17 00:56:38.409174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-17 00:56:38.409182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-17 00:56:38.409189 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.409195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-17 00:56:38.409201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-17 00:56:38.409211 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.409218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-17 00:56:38.409224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-17 00:56:38.409230 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.409236 | orchestrator | 2026-03-17 00:56:38.409242 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-17 00:56:38.409248 | orchestrator | Tuesday 17 March 2026 00:51:52 +0000 (0:00:01.204) 0:01:09.249 ********* 2026-03-17 00:56:38.409254 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.409264 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.409270 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.409275 | orchestrator | 2026-03-17 00:56:38.409281 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-17 00:56:38.409286 | orchestrator | Tuesday 17 March 2026 00:51:53 +0000 (0:00:01.165) 0:01:10.415 ********* 2026-03-17 00:56:38.409292 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.409298 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.409304 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.409310 | orchestrator | 2026-03-17 00:56:38.409317 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-17 00:56:38.409321 | orchestrator | Tuesday 17 March 2026 00:51:55 +0000 (0:00:02.000) 
0:01:12.416 ********* 2026-03-17 00:56:38.409325 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.409328 | orchestrator | 2026-03-17 00:56:38.409338 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-17 00:56:38.409342 | orchestrator | Tuesday 17 March 2026 00:51:56 +0000 (0:00:00.707) 0:01:13.123 ********* 2026-03-17 00:56:38.409407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.409414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.409433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.409465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409473 | orchestrator | 2026-03-17 00:56:38.409477 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-17 00:56:38.409481 | orchestrator | Tuesday 17 March 2026 00:51:59 +0000 (0:00:03.467) 0:01:16.591 ********* 2026-03-17 00:56:38.409487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.409494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409514 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.409518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.409522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.409528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409555 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.409558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.409562 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.409566 | orchestrator | 2026-03-17 00:56:38.409570 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-17 00:56:38.409574 | orchestrator | Tuesday 17 March 2026 00:52:00 +0000 (0:00:00.643) 0:01:17.235 ********* 2026-03-17 00:56:38.409578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-17 00:56:38.409582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-17 00:56:38.409586 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.409590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}})  2026-03-17 00:56:38.409594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-17 00:56:38.409598 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.409601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-17 00:56:38.409605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-17 00:56:38.409613 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.409617 | orchestrator | 2026-03-17 00:56:38.409620 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-17 00:56:38.409624 | orchestrator | Tuesday 17 March 2026 00:52:01 +0000 (0:00:01.028) 0:01:18.263 ********* 2026-03-17 00:56:38.409628 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.409632 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.409635 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.409639 | orchestrator | 2026-03-17 00:56:38.409643 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-17 00:56:38.409649 | orchestrator | Tuesday 17 March 2026 00:52:02 +0000 (0:00:01.093) 0:01:19.357 ********* 2026-03-17 00:56:38.409653 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.409657 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.409660 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.409664 | orchestrator | 
2026-03-17 00:56:38.409668 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-17 00:56:38.409671 | orchestrator | Tuesday 17 March 2026 00:52:04 +0000 (0:00:02.293) 0:01:21.650 ********* 2026-03-17 00:56:38.409675 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.409681 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.409687 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.409695 | orchestrator | 2026-03-17 00:56:38.409703 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-17 00:56:38.409710 | orchestrator | Tuesday 17 March 2026 00:52:05 +0000 (0:00:00.294) 0:01:21.944 ********* 2026-03-17 00:56:38.409716 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.409723 | orchestrator | 2026-03-17 00:56:38.409729 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-17 00:56:38.409735 | orchestrator | Tuesday 17 March 2026 00:52:05 +0000 (0:00:00.719) 0:01:22.664 ********* 2026-03-17 00:56:38.409758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 
5']}}}})
2026-03-17 00:56:38.409766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-17 00:56:38.409773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-17 00:56:38.409784 | orchestrator |
2026-03-17 00:56:38.409790 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-03-17 00:56:38.409796 | orchestrator | Tuesday 17 March 2026 00:52:08 +0000 (0:00:02.220) 0:01:24.885 *********
2026-03-17 00:56:38.409805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-17 00:56:38.409811 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.409817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-17 00:56:38.409823 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.409844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-17 00:56:38.409853 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.409860 | orchestrator |
2026-03-17 00:56:38.409867 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-03-17 00:56:38.409873 | orchestrator | Tuesday 17 March 2026 00:52:09 +0000 (0:00:01.276) 0:01:26.161 *********
2026-03-17 00:56:38.409881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-17 00:56:38.409894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-17 00:56:38.409902 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.410007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-17 00:56:38.410773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-17 00:56:38.410929 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.410946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-17 00:56:38.410952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-17 00:56:38.410957 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.410961 | orchestrator |
2026-03-17 00:56:38.410968 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-03-17 00:56:38.410974 | orchestrator | Tuesday 17 March 2026 00:52:10 +0000 (0:00:01.514) 0:01:27.676 *********
2026-03-17 00:56:38.410980 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.411263 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.411269 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.411274 | orchestrator |
2026-03-17 00:56:38.411278 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-03-17 00:56:38.411282 | orchestrator | Tuesday 17 March 2026 00:52:11 +0000 (0:00:00.548) 0:01:28.225 *********
2026-03-17 00:56:38.411287 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.411291 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.411295 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.411300 | orchestrator |
2026-03-17 00:56:38.411304 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-03-17 00:56:38.411357 | orchestrator | Tuesday 17 March 2026 00:52:12 +0000 (0:00:01.018) 0:01:29.243 *********
2026-03-17 00:56:38.411363 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:56:38.411368 | orchestrator |
2026-03-17 00:56:38.411372 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-03-17 00:56:38.411376 | orchestrator | Tuesday 17 March 2026 00:52:13 +0000 (0:00:00.629) 0:01:29.873 *********
2026-03-17 00:56:38.411391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-17 00:56:38.411398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-17 00:56:38.411487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-17 00:56:38.411513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411563 | orchestrator |
2026-03-17 00:56:38.411567 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-03-17 00:56:38.411572 | orchestrator | Tuesday 17 March 2026 00:52:16 +0000 (0:00:03.366) 0:01:33.239 *********
2026-03-17 00:56:38.411576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-17 00:56:38.411583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-17 00:56:38.411589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411665 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.411674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411680 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.411686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-17 00:56:38.411738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.411756 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.411760 | orchestrator |
2026-03-17 00:56:38.411764 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-03-17 00:56:38.411768 | orchestrator | Tuesday 17 March 2026 00:52:17 +0000 (0:00:01.026) 0:01:34.265 *********
2026-03-17 00:56:38.411774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-17 00:56:38.411780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-17 00:56:38.412100 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.412120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-17 00:56:38.412128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-17 00:56:38.412134 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.412141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-17 00:56:38.412159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-17 00:56:38.412166 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.412173 | orchestrator |
2026-03-17 00:56:38.412179 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-03-17 00:56:38.412183 | orchestrator | Tuesday 17 March 2026 00:52:18 +0000 (0:00:00.799) 0:01:35.064 *********
2026-03-17 00:56:38.412187 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:56:38.412192 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:56:38.412196 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:56:38.412200 | orchestrator |
2026-03-17 00:56:38.412204 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-03-17 00:56:38.412209 | orchestrator | Tuesday 17 March 2026 00:52:19 +0000 (0:00:01.334) 0:01:36.399 *********
2026-03-17 00:56:38.412213 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:56:38.412218 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:56:38.412224 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:56:38.412230 | orchestrator |
2026-03-17 00:56:38.412296 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-03-17 00:56:38.412306 | orchestrator | Tuesday 17 March 2026 00:52:21 +0000 (0:00:02.103) 0:01:38.503 *********
2026-03-17 00:56:38.412312 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.412319 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.412325 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.412330 | orchestrator |
2026-03-17 00:56:38.412340 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-03-17 00:56:38.412346 | orchestrator | Tuesday 17 March 2026 00:52:22 +0000 (0:00:00.408) 0:01:38.911 *********
2026-03-17 00:56:38.412352 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.412358 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.412364 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.412370 | orchestrator |
2026-03-17 00:56:38.412376 | orchestrator | TASK [include_role : designate] ************************************************
2026-03-17 00:56:38.412383 | orchestrator | Tuesday 17 March 2026 00:52:22 +0000 (0:00:00.275) 0:01:39.187 *********
2026-03-17 00:56:38.412387 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:56:38.412391 | orchestrator |
2026-03-17 00:56:38.412394 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-03-17 00:56:38.412398 | orchestrator | Tuesday 17 March 2026 00:52:23 +0000 (0:00:00.700) 0:01:39.888 *********
2026-03-17 00:56:38.412403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes':
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 00:56:38.412409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 00:56:38.412424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.412429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.412471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.412477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.412481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.412546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 00:56:38.412624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 00:56:38.412631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.412675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.412681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period':
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.412685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.412689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.412702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 
'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 00:56:38.412706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 00:56:38.412711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.412735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.412740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.412744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.412752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.412756 | orchestrator | 2026-03-17 00:56:38.412760 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-17 00:56:38.412764 | orchestrator | Tuesday 17 March 2026 00:52:26 +0000 (0:00:03.547) 0:01:43.436 ********* 2026-03-17 00:56:38.412768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 00:56:38.412960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 00:56:38.413055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 00:56:38.413126 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.413180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 00:56:38.413186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413288 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.413324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 00:56:38.413330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 00:56:38.413339 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.413391 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.413395 | orchestrator | 2026-03-17 00:56:38.413399 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-17 00:56:38.413403 | orchestrator | Tuesday 17 March 2026 00:52:27 +0000 (0:00:00.801) 0:01:44.237 ********* 2026-03-17 00:56:38.413407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-17 00:56:38.413412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}})  2026-03-17 00:56:38.413720 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.413728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-17 00:56:38.413732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-17 00:56:38.413736 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.413740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-17 00:56:38.413743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-17 00:56:38.413747 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.413751 | orchestrator | 2026-03-17 00:56:38.413755 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-17 00:56:38.413759 | orchestrator | Tuesday 17 March 2026 00:52:28 +0000 (0:00:01.070) 0:01:45.307 ********* 2026-03-17 00:56:38.413763 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.413766 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.413770 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.413774 | orchestrator | 2026-03-17 00:56:38.413778 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-17 00:56:38.413782 | orchestrator | Tuesday 17 March 2026 00:52:30 +0000 (0:00:01.821) 0:01:47.129 ********* 2026-03-17 00:56:38.413785 | orchestrator | changed: 
[testbed-node-0] 2026-03-17 00:56:38.413789 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.413793 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.413797 | orchestrator | 2026-03-17 00:56:38.413800 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-17 00:56:38.413804 | orchestrator | Tuesday 17 March 2026 00:52:32 +0000 (0:00:01.705) 0:01:48.835 ********* 2026-03-17 00:56:38.413808 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.413812 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.413815 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.413819 | orchestrator | 2026-03-17 00:56:38.413823 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-17 00:56:38.413826 | orchestrator | Tuesday 17 March 2026 00:52:32 +0000 (0:00:00.435) 0:01:49.271 ********* 2026-03-17 00:56:38.413830 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.413834 | orchestrator | 2026-03-17 00:56:38.413840 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-17 00:56:38.413844 | orchestrator | Tuesday 17 March 2026 00:52:33 +0000 (0:00:00.732) 0:01:50.003 ********* 2026-03-17 00:56:38.413924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 00:56:38.413938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-17 00:56:38.413945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 00:56:38.414006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-17 00:56:38.414255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 00:56:38.414310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-17 00:56:38.414323 | orchestrator | 2026-03-17 00:56:38.414327 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-17 00:56:38.414332 | orchestrator | Tuesday 17 March 2026 00:52:37 +0000 (0:00:04.353) 0:01:54.356 ********* 2026-03-17 00:56:38.414340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 00:56:38.414390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-17 00:56:38.414418 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.414426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 00:56:38.414486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-17 00:56:38.414506 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.414511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 00:56:38.414548 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-17 00:56:38.414558 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.414562 | orchestrator | 
2026-03-17 00:56:38.414566 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-17 00:56:38.414607 | orchestrator | Tuesday 17 March 2026 00:52:40 +0000 (0:00:02.734) 0:01:57.091 ********* 2026-03-17 00:56:38.414612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-17 00:56:38.414616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-17 00:56:38.414620 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.414624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-17 00:56:38.414628 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-17 00:56:38.414632 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.414639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-17 00:56:38.414643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-17 00:56:38.414651 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.414654 | orchestrator | 2026-03-17 00:56:38.414658 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-17 00:56:38.414662 | orchestrator | Tuesday 17 March 
2026 00:52:43 +0000 (0:00:03.327) 0:02:00.419 ********* 2026-03-17 00:56:38.414666 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.414670 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.414674 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.414677 | orchestrator | 2026-03-17 00:56:38.414681 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-17 00:56:38.414685 | orchestrator | Tuesday 17 March 2026 00:52:44 +0000 (0:00:01.111) 0:02:01.530 ********* 2026-03-17 00:56:38.414689 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.414693 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.414697 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.414700 | orchestrator | 2026-03-17 00:56:38.414753 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-17 00:56:38.414761 | orchestrator | Tuesday 17 March 2026 00:52:46 +0000 (0:00:02.044) 0:02:03.574 ********* 2026-03-17 00:56:38.414767 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.414773 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.414778 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.414783 | orchestrator | 2026-03-17 00:56:38.414788 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-17 00:56:38.414797 | orchestrator | Tuesday 17 March 2026 00:52:47 +0000 (0:00:00.564) 0:02:04.139 ********* 2026-03-17 00:56:38.414805 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.414811 | orchestrator | 2026-03-17 00:56:38.414818 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-17 00:56:38.414824 | orchestrator | Tuesday 17 March 2026 00:52:48 +0000 (0:00:00.916) 0:02:05.055 ********* 2026-03-17 00:56:38.414830 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-17 00:56:38.414838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-17 00:56:38.414849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-17 00:56:38.414861 | orchestrator | 2026-03-17 00:56:38.414866 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-17 00:56:38.414871 | orchestrator | Tuesday 17 March 2026 00:52:52 +0000 (0:00:04.244) 0:02:09.299 ********* 2026-03-17 00:56:38.414879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-17 00:56:38.414884 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.415010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-17 00:56:38.415044 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-17 00:56:38.415051 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.415056 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.415063 | orchestrator | 2026-03-17 00:56:38.415069 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-17 00:56:38.415075 | orchestrator | Tuesday 17 March 2026 00:52:53 +0000 (0:00:00.522) 0:02:09.822 ********* 2026-03-17 00:56:38.415081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-17 00:56:38.415086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-17 00:56:38.415090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-17 00:56:38.415260 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.415269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-17 00:56:38.415273 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.415277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-17 00:56:38.415281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-17 00:56:38.415284 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.415288 | orchestrator | 2026-03-17 00:56:38.415292 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-17 00:56:38.415296 | orchestrator | Tuesday 17 March 2026 00:52:53 +0000 (0:00:00.619) 0:02:10.441 ********* 2026-03-17 00:56:38.415300 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.415360 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.415369 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.415373 | orchestrator | 2026-03-17 00:56:38.415376 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-17 00:56:38.415380 | orchestrator | Tuesday 17 March 2026 00:52:55 +0000 (0:00:01.354) 0:02:11.795 ********* 2026-03-17 00:56:38.415384 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.415387 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.415391 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.415395 | orchestrator | 2026-03-17 00:56:38.415399 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-17 00:56:38.415402 | orchestrator | Tuesday 17 March 2026 00:52:57 +0000 (0:00:02.285) 0:02:14.082 ********* 2026-03-17 
00:56:38.415406 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.415410 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.415413 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.415417 | orchestrator | 2026-03-17 00:56:38.415421 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-17 00:56:38.415424 | orchestrator | Tuesday 17 March 2026 00:52:58 +0000 (0:00:00.895) 0:02:14.978 ********* 2026-03-17 00:56:38.415428 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.415432 | orchestrator | 2026-03-17 00:56:38.415435 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-17 00:56:38.415439 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:01.657) 0:02:16.635 ********* 2026-03-17 00:56:38.415486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 00:56:38.415503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 00:56:38.415538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 00:56:38.415547 | orchestrator | 2026-03-17 00:56:38.415551 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-17 00:56:38.415555 | orchestrator | Tuesday 17 March 2026 00:53:05 +0000 (0:00:05.512) 0:02:22.148 ********* 2026-03-17 00:56:38.415589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2026-03-17 00:56:38.415595 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.415600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 00:56:38.415610 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.415691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 00:56:38.415703 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.415707 | orchestrator | 2026-03-17 00:56:38.415711 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-17 00:56:38.415715 | orchestrator | Tuesday 17 March 2026 00:53:06 +0000 (0:00:01.142) 0:02:23.291 ********* 2026-03-17 00:56:38.415720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-17 00:56:38.415726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 00:56:38.415732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-17 00:56:38.415737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 00:56:38.415742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-17 00:56:38.415746 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.415749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-17 00:56:38.415756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 00:56:38.415760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-17 00:56:38.415764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 00:56:38.415768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-17 00:56:38.415813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-17 00:56:38.415823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 00:56:38.415827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 00:56:38.415831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-17 00:56:38.415835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-17 00:56:38.415839 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.415842 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.415846 | orchestrator | 2026-03-17 00:56:38.416927 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-17 00:56:38.416945 | orchestrator | Tuesday 17 March 2026 00:53:07 +0000 (0:00:01.122) 0:02:24.413 ********* 2026-03-17 00:56:38.416949 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.416953 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.416957 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.416961 | orchestrator | 2026-03-17 00:56:38.416965 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-17 00:56:38.416970 | orchestrator | Tuesday 17 March 2026 00:53:09 +0000 (0:00:01.669) 0:02:26.083 ********* 2026-03-17 00:56:38.416974 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.416977 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.416982 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.416985 | orchestrator | 2026-03-17 00:56:38.416989 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-17 00:56:38.416993 | orchestrator | Tuesday 17 March 2026 00:53:11 +0000 (0:00:01.682) 0:02:27.765 ********* 2026-03-17 00:56:38.416997 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.417001 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.417004 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.417008 | orchestrator | 2026-03-17 00:56:38.417034 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-17 00:56:38.417038 | orchestrator | Tuesday 17 March 2026 00:53:11 +0000 (0:00:00.325) 0:02:28.091 ********* 
2026-03-17 00:56:38.417042 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.417045 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.417049 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.417053 | orchestrator | 2026-03-17 00:56:38.417057 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-17 00:56:38.417060 | orchestrator | Tuesday 17 March 2026 00:53:11 +0000 (0:00:00.412) 0:02:28.504 ********* 2026-03-17 00:56:38.417065 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.417068 | orchestrator | 2026-03-17 00:56:38.417074 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-17 00:56:38.417079 | orchestrator | Tuesday 17 March 2026 00:53:12 +0000 (0:00:00.877) 0:02:29.381 ********* 2026-03-17 00:56:38.417084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 00:56:38.417161 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 00:56:38.417169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 00:56:38.417173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 00:56:38.417177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 00:56:38.417184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 00:56:38.417213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 00:56:38.417218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 00:56:38.417222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 00:56:38.417226 | orchestrator | 2026-03-17 00:56:38.417230 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-17 00:56:38.417234 | orchestrator | Tuesday 17 March 2026 00:53:15 +0000 (0:00:03.341) 0:02:32.722 ********* 2026-03-17 00:56:38.417240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 00:56:38.417247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 00:56:38.417251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 00:56:38.417255 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.417284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 00:56:38.417289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 00:56:38.417294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 00:56:38.417298 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.417303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 00:56:38.417310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 00:56:38.417335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 00:56:38.417340 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.417344 | orchestrator | 2026-03-17 00:56:38.417347 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-17 00:56:38.417351 | orchestrator | Tuesday 17 March 2026 
00:53:16 +0000 (0:00:00.617) 0:02:33.340 ********* 2026-03-17 00:56:38.417356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-17 00:56:38.417360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-17 00:56:38.417365 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.417369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-17 00:56:38.417373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-17 00:56:38.417377 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.417381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-17 00:56:38.417385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  
2026-03-17 00:56:38.417393 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.417397 | orchestrator | 2026-03-17 00:56:38.417401 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-17 00:56:38.417404 | orchestrator | Tuesday 17 March 2026 00:53:17 +0000 (0:00:00.828) 0:02:34.168 ********* 2026-03-17 00:56:38.417408 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.417412 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.417416 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.417419 | orchestrator | 2026-03-17 00:56:38.417425 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-17 00:56:38.417429 | orchestrator | Tuesday 17 March 2026 00:53:18 +0000 (0:00:01.226) 0:02:35.395 ********* 2026-03-17 00:56:38.417433 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.417436 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.417440 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.417444 | orchestrator | 2026-03-17 00:56:38.417447 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-17 00:56:38.417451 | orchestrator | Tuesday 17 March 2026 00:53:21 +0000 (0:00:02.397) 0:02:37.793 ********* 2026-03-17 00:56:38.417455 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.417459 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.417462 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.417466 | orchestrator | 2026-03-17 00:56:38.417470 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-17 00:56:38.417473 | orchestrator | Tuesday 17 March 2026 00:53:21 +0000 (0:00:00.530) 0:02:38.323 ********* 2026-03-17 00:56:38.417477 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.417481 | 
orchestrator | 2026-03-17 00:56:38.417485 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-17 00:56:38.417489 | orchestrator | Tuesday 17 March 2026 00:53:22 +0000 (0:00:01.020) 0:02:39.344 ********* 2026-03-17 00:56:38.417522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 00:56:38.417528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  
2026-03-17 00:56:38.417533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 00:56:38.417542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 00:56:38.417577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417583 | orchestrator | 2026-03-17 00:56:38.417587 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-17 00:56:38.417591 | orchestrator | Tuesday 17 March 2026 00:53:26 +0000 (0:00:03.647) 0:02:42.991 ********* 2026-03-17 00:56:38.417595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 00:56:38.417603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417607 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.417613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 00:56:38.417645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 00:56:38.417651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417655 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.417666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417670 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.417674 | orchestrator | 2026-03-17 00:56:38.417678 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-17 00:56:38.417681 | orchestrator | Tuesday 17 March 2026 00:53:27 +0000 (0:00:01.269) 0:02:44.261 ********* 2026-03-17 00:56:38.417686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-17 00:56:38.417690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-17 00:56:38.417694 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.417698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-17 00:56:38.417701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-17 00:56:38.417705 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.417709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-17 00:56:38.417713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-17 00:56:38.417717 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.417721 | orchestrator | 2026-03-17 00:56:38.417736 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-17 00:56:38.417740 | orchestrator | Tuesday 17 March 2026 00:53:28 +0000 (0:00:01.018) 0:02:45.280 ********* 2026-03-17 00:56:38.417744 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.417748 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.417751 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.417755 | orchestrator | 2026-03-17 00:56:38.417759 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-17 00:56:38.417763 | orchestrator | Tuesday 17 March 2026 00:53:29 +0000 (0:00:01.426) 0:02:46.706 ********* 
2026-03-17 00:56:38.417766 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.417770 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.417774 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.417778 | orchestrator | 2026-03-17 00:56:38.417781 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-17 00:56:38.417785 | orchestrator | Tuesday 17 March 2026 00:53:32 +0000 (0:00:02.272) 0:02:48.979 ********* 2026-03-17 00:56:38.417789 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.417793 | orchestrator | 2026-03-17 00:56:38.417797 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-17 00:56:38.417800 | orchestrator | Tuesday 17 March 2026 00:53:33 +0000 (0:00:01.083) 0:02:50.062 ********* 2026-03-17 00:56:38.417859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-17 00:56:38.417866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-17 00:56:38.417904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-17 00:56:38.417913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417960 | orchestrator | 2026-03-17 00:56:38.417964 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-17 00:56:38.417968 | orchestrator | Tuesday 17 March 2026 00:53:36 +0000 (0:00:03.651) 0:02:53.714 ********* 2026-03-17 00:56:38.417972 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-17 00:56:38.417976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.417990 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.417994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-17 00:56:38.418102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 
'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.418110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.418114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.418118 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.418122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-17 00:56:38.418129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.418133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.418172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.418178 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.418182 | orchestrator | 2026-03-17 00:56:38.418186 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-17 00:56:38.418190 | orchestrator | Tuesday 17 March 2026 00:53:37 +0000 (0:00:00.658) 0:02:54.373 ********* 2026-03-17 00:56:38.418194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-17 00:56:38.418198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-17 00:56:38.418202 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.418206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-17 00:56:38.418210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-17 00:56:38.418214 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.418217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-17 00:56:38.418221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-17 00:56:38.418225 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.418229 | orchestrator | 2026-03-17 00:56:38.418246 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-17 00:56:38.418250 | orchestrator | Tuesday 17 March 2026 00:53:38 +0000 (0:00:01.251) 0:02:55.625 ********* 2026-03-17 00:56:38.418253 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.418257 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.418261 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.418265 | orchestrator | 2026-03-17 00:56:38.418269 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-17 00:56:38.418273 | orchestrator | Tuesday 17 March 2026 00:53:40 +0000 (0:00:01.264) 0:02:56.889 ********* 2026-03-17 00:56:38.418276 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.418280 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.418284 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.418291 | orchestrator | 2026-03-17 00:56:38.418295 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-17 00:56:38.418374 | orchestrator | Tuesday 17 March 2026 00:53:42 +0000 
(0:00:02.035) 0:02:58.925 ********* 2026-03-17 00:56:38.418399 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.418405 | orchestrator | 2026-03-17 00:56:38.418416 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-17 00:56:38.418422 | orchestrator | Tuesday 17 March 2026 00:53:43 +0000 (0:00:01.263) 0:03:00.189 ********* 2026-03-17 00:56:38.418428 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-17 00:56:38.418433 | orchestrator | 2026-03-17 00:56:38.418439 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-17 00:56:38.418444 | orchestrator | Tuesday 17 March 2026 00:53:46 +0000 (0:00:02.844) 0:03:03.034 ********* 2026-03-17 00:56:38.418534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:56:38.418547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-17 00:56:38.418551 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.418560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:56:38.418569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-17 00:56:38.418573 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 00:56:38.418600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:56:38.418605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 
'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-17 00:56:38.418614 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.418618 | orchestrator | 2026-03-17 00:56:38.418622 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-17 00:56:38.418626 | orchestrator | Tuesday 17 March 2026 00:53:48 +0000 (0:00:02.111) 0:03:05.145 ********* 2026-03-17 00:56:38.418666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:56:38.418672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-17 00:56:38.418676 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.418683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:56:38.418692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-17 00:56:38.418698 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.418735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:56:38.418745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-17 00:56:38.418756 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.418762 | orchestrator | 2026-03-17 00:56:38.418768 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-17 00:56:38.418774 | orchestrator | Tuesday 17 March 2026 00:53:50 +0000 (0:00:02.467) 0:03:07.613 ********* 2026-03-17 00:56:38.418780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-17 00:56:38.418790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-17 00:56:38.418797 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.418802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-17 00:56:38.418845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-17 00:56:38.418853 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.418859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-17 00:56:38.418865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-17 00:56:38.418876 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.418883 | orchestrator | 2026-03-17 00:56:38.418889 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-17 00:56:38.418894 | orchestrator | Tuesday 17 March 2026 00:53:53 +0000 (0:00:02.699) 0:03:10.313 ********* 2026-03-17 00:56:38.418900 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.418905 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.418910 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.418916 | orchestrator | 2026-03-17 00:56:38.418922 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-17 00:56:38.418928 | orchestrator | Tuesday 17 March 2026 00:53:55 +0000 (0:00:01.722) 0:03:12.035 ********* 2026-03-17 00:56:38.418933 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.418939 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.418947 | 
orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.418951 | orchestrator | 2026-03-17 00:56:38.418955 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-17 00:56:38.418959 | orchestrator | Tuesday 17 March 2026 00:53:56 +0000 (0:00:01.451) 0:03:13.486 ********* 2026-03-17 00:56:38.418962 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.418966 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.418970 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.418973 | orchestrator | 2026-03-17 00:56:38.418977 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-17 00:56:38.418981 | orchestrator | Tuesday 17 March 2026 00:53:57 +0000 (0:00:00.301) 0:03:13.788 ********* 2026-03-17 00:56:38.418984 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.418988 | orchestrator | 2026-03-17 00:56:38.418992 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-17 00:56:38.418999 | orchestrator | Tuesday 17 March 2026 00:53:58 +0000 (0:00:01.305) 0:03:15.093 ********* 2026-03-17 00:56:38.419003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2026-03-17 00:56:38.419097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-17 00:56:38.419106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-17 00:56:38.419114 | orchestrator | 2026-03-17 00:56:38.419118 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-17 00:56:38.419122 | orchestrator | Tuesday 17 March 2026 00:53:59 +0000 (0:00:01.427) 0:03:16.521 ********* 2026-03-17 00:56:38.419126 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-17 00:56:38.419130 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.419137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-17 00:56:38.419141 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.419145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-17 00:56:38.419149 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.419153 | orchestrator | 2026-03-17 00:56:38.419157 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-17 00:56:38.419161 | orchestrator | Tuesday 17 March 2026 00:54:00 +0000 (0:00:00.381) 0:03:16.902 ********* 2026-03-17 00:56:38.419180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-17 00:56:38.419184 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.419230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-17 00:56:38.419236 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.419240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-17 00:56:38.419244 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 00:56:38.419248 | orchestrator | 2026-03-17 00:56:38.419252 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-17 00:56:38.419256 | orchestrator | Tuesday 17 March 2026 00:54:01 +0000 (0:00:00.909) 0:03:17.812 ********* 2026-03-17 00:56:38.419259 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.419263 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.419267 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.419270 | orchestrator | 2026-03-17 00:56:38.419274 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-17 00:56:38.419278 | orchestrator | Tuesday 17 March 2026 00:54:01 +0000 (0:00:00.458) 0:03:18.271 ********* 2026-03-17 00:56:38.419282 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.419285 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.419289 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.419293 | orchestrator | 2026-03-17 00:56:38.419297 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-17 00:56:38.419301 | orchestrator | Tuesday 17 March 2026 00:54:02 +0000 (0:00:01.232) 0:03:19.504 ********* 2026-03-17 00:56:38.419304 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.419308 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.419312 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.419316 | orchestrator | 2026-03-17 00:56:38.419319 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-17 00:56:38.419323 | orchestrator | Tuesday 17 March 2026 00:54:03 +0000 (0:00:00.313) 0:03:19.817 ********* 2026-03-17 00:56:38.419327 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.419330 | orchestrator | 2026-03-17 00:56:38.419334 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-17 00:56:38.419338 | orchestrator | Tuesday 17 March 2026 00:54:04 +0000 (0:00:01.379) 0:03:21.196 ********* 2026-03-17 00:56:38.419345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 00:56:38.419350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419386 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 00:56:38.419401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-17 00:56:38.419408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2026-03-17 00:56:38.419434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-17 00:56:38.419494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:56:38.419499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 00:56:38.419562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:56:38.419615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 00:56:38.419622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419630 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:56:38.419640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2026-03-17 00:56:38.419678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-17 00:56:38.419688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 00:56:38.419708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:56:38.419736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:56:38.419765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 00:56:38.419821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:56:38.419825 | orchestrator | 2026-03-17 00:56:38.419829 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-17 00:56:38.419834 | orchestrator | Tuesday 17 March 2026 00:54:08 +0000 (0:00:03.840) 0:03:25.037 ********* 2026-03-17 00:56:38.419840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 00:56:38.419872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-17 00:56:38.419904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.419960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:56:38.419986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.419996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 00:56:38.420003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-17 00:56:38.420171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.420193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 00:56:38.420207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 00:56:38.420313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': 
{'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:56:38.420331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-17 00:56:38.420386 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.420393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-17 00:56:38.420413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.420424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.420431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.420495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:56:38.420507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.420588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-17 00:56:38.420623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.420683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:56:38.420713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 00:56:38.420736 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:56:38.420741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420745 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.420782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-17 
00:56:38.420794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-17 00:56:38.420798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.420802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 00:56:38.420810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 00:56:38.420814 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.420817 | orchestrator | 2026-03-17 00:56:38.420822 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-17 00:56:38.420826 | orchestrator | Tuesday 17 March 2026 00:54:09 +0000 (0:00:01.398) 0:03:26.435 ********* 2026-03-17 00:56:38.420830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-17 00:56:38.420835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-17 00:56:38.420840 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.420856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-17 00:56:38.420865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-17 00:56:38.420869 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.420873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-17 00:56:38.420876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-17 00:56:38.420880 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.420884 | orchestrator | 2026-03-17 00:56:38.420888 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-17 00:56:38.420891 | orchestrator | Tuesday 17 March 2026 00:54:11 +0000 (0:00:01.807) 0:03:28.243 ********* 2026-03-17 00:56:38.420895 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.420899 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.420903 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.420906 | orchestrator | 2026-03-17 00:56:38.420910 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-17 00:56:38.420914 | orchestrator | Tuesday 17 March 2026 00:54:12 +0000 (0:00:01.241) 0:03:29.484 ********* 2026-03-17 00:56:38.420918 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.420922 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.420925 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.420929 | orchestrator | 2026-03-17 
00:56:38.420933 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-17 00:56:38.420937 | orchestrator | Tuesday 17 March 2026 00:54:14 +0000 (0:00:01.902) 0:03:31.387 ********* 2026-03-17 00:56:38.420940 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.420944 | orchestrator | 2026-03-17 00:56:38.420948 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-17 00:56:38.420951 | orchestrator | Tuesday 17 March 2026 00:54:15 +0000 (0:00:01.195) 0:03:32.583 ********* 2026-03-17 00:56:38.420956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.420963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.420983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.420988 | orchestrator | 2026-03-17 00:56:38.420991 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-17 00:56:38.420995 | orchestrator | Tuesday 17 March 2026 00:54:18 +0000 (0:00:03.066) 0:03:35.649 ********* 2026-03-17 00:56:38.420999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.421003 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.421007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.421030 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.421040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.421052 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.421056 | orchestrator | 2026-03-17 00:56:38.421060 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-17 00:56:38.421064 | orchestrator | Tuesday 17 March 2026 00:54:19 +0000 (0:00:00.447) 0:03:36.097 ********* 2026-03-17 00:56:38.421068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421086 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.421105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-17 
00:56:38.421109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421113 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.421117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421125 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.421128 | orchestrator | 2026-03-17 00:56:38.421132 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-17 00:56:38.421136 | orchestrator | Tuesday 17 March 2026 00:54:20 +0000 (0:00:00.728) 0:03:36.825 ********* 2026-03-17 00:56:38.421140 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.421143 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.421147 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.421151 | orchestrator | 2026-03-17 00:56:38.421154 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-17 00:56:38.421158 | orchestrator | Tuesday 17 March 2026 00:54:21 +0000 (0:00:01.522) 0:03:38.347 ********* 2026-03-17 00:56:38.421162 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.421165 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.421169 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.421173 | orchestrator | 2026-03-17 00:56:38.421177 | orchestrator 
| TASK [include_role : nova] ***************************************************** 2026-03-17 00:56:38.421180 | orchestrator | Tuesday 17 March 2026 00:54:23 +0000 (0:00:01.591) 0:03:39.939 ********* 2026-03-17 00:56:38.421184 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.421188 | orchestrator | 2026-03-17 00:56:38.421192 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-17 00:56:38.421196 | orchestrator | Tuesday 17 March 2026 00:54:24 +0000 (0:00:01.314) 0:03:41.254 ********* 2026-03-17 00:56:38.421203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.421212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.421234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.421241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.421248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.421261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.421271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.421294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.421301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.421307 | orchestrator | 2026-03-17 00:56:38.421313 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-17 00:56:38.421319 | orchestrator | Tuesday 17 March 2026 00:54:28 +0000 (0:00:03.903) 0:03:45.157 ********* 2026-03-17 00:56:38.421325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.421344 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.421351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.421357 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.421382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.421390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.421395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.421403 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.421409 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.421414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.421431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 00:56:38.421436 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.421440 | orchestrator | 2026-03-17 00:56:38.421445 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-17 00:56:38.421449 | orchestrator | Tuesday 17 March 2026 00:54:29 +0000 (0:00:00.951) 0:03:46.109 ********* 2026-03-17 00:56:38.421454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421476 | orchestrator | skipping: [testbed-node-0] 2026-03-17 
00:56:38.421481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421505 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.421509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-17 00:56:38.421523 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.421527 | orchestrator | 2026-03-17 00:56:38.421531 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-17 00:56:38.421536 | orchestrator | Tuesday 17 March 2026 00:54:30 +0000 (0:00:00.797) 0:03:46.907 ********* 2026-03-17 00:56:38.421540 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.421544 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.421549 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.421553 | orchestrator | 2026-03-17 00:56:38.421558 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-17 00:56:38.421564 | orchestrator | Tuesday 17 March 2026 00:54:31 +0000 (0:00:01.347) 0:03:48.254 ********* 2026-03-17 00:56:38.421571 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.421576 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.421580 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.421584 | orchestrator | 2026-03-17 00:56:38.421600 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-17 00:56:38.421605 | orchestrator | Tuesday 17 March 2026 00:54:33 +0000 (0:00:01.884) 0:03:50.139 ********* 2026-03-17 00:56:38.421609 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.421612 | orchestrator | 2026-03-17 00:56:38.421616 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-17 00:56:38.421620 | orchestrator | Tuesday 17 March 2026 00:54:34 +0000 (0:00:01.345) 0:03:51.484 ********* 2026-03-17 00:56:38.421628 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-2, 
testbed-node-1 => (item=nova-novncproxy) 2026-03-17 00:56:38.421632 | orchestrator | 2026-03-17 00:56:38.421636 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-17 00:56:38.421647 | orchestrator | Tuesday 17 March 2026 00:54:35 +0000 (0:00:00.746) 0:03:52.230 ********* 2026-03-17 00:56:38.421651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-17 00:56:38.421656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-17 00:56:38.421660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-17 00:56:38.421664 | orchestrator | 
2026-03-17 00:56:38.421668 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-17 00:56:38.421672 | orchestrator | Tuesday 17 March 2026 00:54:39 +0000 (0:00:04.040) 0:03:56.271 ********* 2026-03-17 00:56:38.421679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:56:38.421683 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.421688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:56:38.421691 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.421695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:56:38.421699 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.421709 | orchestrator | 2026-03-17 00:56:38.421724 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-17 00:56:38.421728 | orchestrator | Tuesday 17 March 2026 00:54:40 +0000 (0:00:00.827) 0:03:57.099 ********* 2026-03-17 00:56:38.421732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-17 00:56:38.421736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-17 00:56:38.421742 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.421745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-17 00:56:38.421754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-17 00:56:38.421758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-17 00:56:38.421762 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.421766 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-17 00:56:38.421770 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.421773 | orchestrator | 2026-03-17 00:56:38.421777 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-17 00:56:38.421781 | orchestrator | Tuesday 17 March 2026 00:54:41 +0000 (0:00:01.249) 0:03:58.348 ********* 2026-03-17 00:56:38.421785 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.421789 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.421792 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.421796 | orchestrator | 2026-03-17 00:56:38.421800 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-17 00:56:38.421804 | orchestrator | Tuesday 17 March 2026 00:54:43 +0000 (0:00:02.095) 0:04:00.443 ********* 2026-03-17 00:56:38.421807 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.421811 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.421815 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.421818 | orchestrator | 2026-03-17 00:56:38.421822 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-17 00:56:38.421826 | orchestrator | Tuesday 17 March 2026 00:54:46 +0000 (0:00:03.124) 0:04:03.568 ********* 2026-03-17 00:56:38.421830 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-17 00:56:38.421834 | orchestrator | 2026-03-17 00:56:38.421838 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-17 00:56:38.421844 | orchestrator | 
Tuesday 17 March 2026 00:54:48 +0000 (0:00:01.348) 0:04:04.917 ********* 2026-03-17 00:56:38.421848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:56:38.421856 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.421860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:56:38.421864 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.421880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:56:38.421885 | orchestrator | skipping: [testbed-node-2] 2026-03-17 
00:56:38.421888 | orchestrator | 2026-03-17 00:56:38.421892 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-17 00:56:38.421896 | orchestrator | Tuesday 17 March 2026 00:54:49 +0000 (0:00:01.101) 0:04:06.019 ********* 2026-03-17 00:56:38.421900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:56:38.421904 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.421907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:56:38.421912 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.421915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-17 00:56:38.421919 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.421923 | orchestrator | 2026-03-17 00:56:38.421927 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-17 00:56:38.421930 | orchestrator | Tuesday 17 March 2026 00:54:50 +0000 (0:00:01.188) 0:04:07.207 ********* 2026-03-17 00:56:38.421934 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.421938 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.421945 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.421949 | orchestrator | 2026-03-17 00:56:38.421953 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-17 00:56:38.421959 | orchestrator | Tuesday 17 March 2026 00:54:52 +0000 (0:00:01.555) 0:04:08.763 ********* 2026-03-17 00:56:38.421963 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.421967 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.421970 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.421974 | orchestrator | 2026-03-17 00:56:38.421978 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-17 00:56:38.421982 | orchestrator | Tuesday 17 March 2026 00:54:54 +0000 (0:00:02.074) 0:04:10.837 ********* 2026-03-17 00:56:38.421985 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.421989 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.421993 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.421996 | orchestrator | 2026-03-17 00:56:38.422000 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-17 00:56:38.422004 | orchestrator | Tuesday 17 March 2026 00:54:56 +0000 (0:00:02.517) 0:04:13.355 ********* 2026-03-17 00:56:38.422007 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-17 00:56:38.422089 | orchestrator | 2026-03-17 00:56:38.422096 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-17 00:56:38.422099 | orchestrator | Tuesday 17 March 2026 00:54:57 +0000 (0:00:00.749) 0:04:14.105 ********* 2026-03-17 00:56:38.422118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-17 00:56:38.422123 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.422126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-17 00:56:38.422130 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.422134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-17 00:56:38.422138 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.422142 | orchestrator | 2026-03-17 00:56:38.422146 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-17 00:56:38.422149 | orchestrator | Tuesday 17 March 2026 00:54:58 +0000 (0:00:01.120) 0:04:15.225 ********* 2026-03-17 00:56:38.422153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-17 00:56:38.422162 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.422165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-17 00:56:38.422169 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.422176 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-17 00:56:38.422180 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.422184 | orchestrator | 2026-03-17 00:56:38.422187 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-17 00:56:38.422191 | orchestrator | Tuesday 17 March 2026 00:54:59 +0000 (0:00:01.165) 0:04:16.391 ********* 2026-03-17 00:56:38.422195 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.422199 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.422202 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.422206 | orchestrator | 2026-03-17 00:56:38.422210 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-17 00:56:38.422214 | orchestrator | Tuesday 17 March 2026 00:55:01 +0000 (0:00:01.369) 0:04:17.760 ********* 2026-03-17 00:56:38.422217 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.422221 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.422225 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.422228 | orchestrator | 2026-03-17 00:56:38.422232 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-17 00:56:38.422236 | orchestrator | Tuesday 17 March 2026 00:55:03 +0000 (0:00:02.121) 0:04:19.881 ********* 2026-03-17 00:56:38.422240 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.422243 | orchestrator | ok: 
[testbed-node-1]
2026-03-17 00:56:38.422247 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:56:38.422251 | orchestrator |
2026-03-17 00:56:38.422254 | orchestrator | TASK [include_role : octavia] **************************************************
2026-03-17 00:56:38.422258 | orchestrator | Tuesday 17 March 2026 00:55:05 +0000 (0:00:02.809) 0:04:22.691 *********
2026-03-17 00:56:38.422274 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:56:38.422278 | orchestrator |
2026-03-17 00:56:38.422282 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-03-17 00:56:38.422286 | orchestrator | Tuesday 17 March 2026 00:55:07 +0000 (0:00:01.348) 0:04:24.039 *********
2026-03-17 00:56:38.422290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-17 00:56:38.422300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-17 00:56:38.422305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-17 00:56:38.422312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-17 00:56:38.422316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.422332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-17 00:56:38.422337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-17 00:56:38.422344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-17 00:56:38.422348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-17 00:56:38.422355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-17 00:56:38.422359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-17 00:56:38.422375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.422379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-17 00:56:38.422386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-17 00:56:38.422390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.422394 | orchestrator |
2026-03-17 00:56:38.422398 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-03-17 00:56:38.422402 | orchestrator | Tuesday 17 March 2026 00:55:10 +0000 (0:00:02.997) 0:04:27.037 *********
2026-03-17 00:56:38.422409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-17 00:56:38.422413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-17 00:56:38.422429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-17 00:56:38.422437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-17 00:56:38.422441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.422445 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.422449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-17 00:56:38.422456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-17 00:56:38.422460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-17 00:56:38.422475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-17 00:56:38.422483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.422487 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.422491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-17 00:56:38.422495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-17 00:56:38.422501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-17 00:56:38.422505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-17 00:56:38.422521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-17 00:56:38.422528 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.422532 | orchestrator |
2026-03-17 00:56:38.422536 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-03-17 00:56:38.422540 | orchestrator | Tuesday 17 March 2026 00:55:11 +0000 (0:00:00.745) 0:04:27.783 *********
2026-03-17 00:56:38.422544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-17 00:56:38.422548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-17 00:56:38.422552 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.422555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-17 00:56:38.422559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-17 00:56:38.422563 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.422567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-17 00:56:38.422570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-17 00:56:38.422574 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.422578 | orchestrator |
2026-03-17 00:56:38.422582 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-03-17 00:56:38.422585 | orchestrator | Tuesday 17 March 2026 00:55:12 +0000 (0:00:01.467) 0:04:29.250 *********
2026-03-17 00:56:38.422589 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:56:38.422593 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:56:38.422596 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:56:38.422600 | orchestrator |
2026-03-17 00:56:38.422604 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-03-17 00:56:38.422608 | orchestrator | Tuesday 17 March 2026 00:55:13 +0000 (0:00:01.437) 0:04:30.688 *********
2026-03-17 00:56:38.422611 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:56:38.422615 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:56:38.422619 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:56:38.422622 | orchestrator |
2026-03-17 00:56:38.422626 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-03-17 00:56:38.422630 | orchestrator | Tuesday 17 March 2026 00:55:15 +0000 (0:00:01.947) 0:04:32.635 *********
2026-03-17 00:56:38.422634 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:56:38.422637 | orchestrator |
2026-03-17 00:56:38.422641 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-03-17 00:56:38.422645 | orchestrator | Tuesday 17 March 2026 00:55:17 +0000 (0:00:01.339) 0:04:33.975 *********
2026-03-17 00:56:38.422652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-17 00:56:38.422672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-17 00:56:38.422677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-17 00:56:38.422682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-17 00:56:38.422689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-17 00:56:38.422709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-17 00:56:38.422713 | orchestrator |
2026-03-17 00:56:38.422717 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-03-17 00:56:38.422721 | orchestrator | Tuesday 17 March 2026 00:55:22 +0000 (0:00:05.212) 0:04:39.187 *********
2026-03-17 00:56:38.422725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-17 00:56:38.422729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-17 00:56:38.422733 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.422741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-17 00:56:38.422760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-17 00:56:38.422765 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.422769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-17 00:56:38.422773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-17 00:56:38.422777 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.422781 | orchestrator |
2026-03-17 00:56:38.422784 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-03-17 00:56:38.422788 | orchestrator | Tuesday 17 March 2026 00:55:23 +0000 (0:00:00.564) 0:04:39.752 *********
2026-03-17 00:56:38.422792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-17 00:56:38.422802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-17 00:56:38.422806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-17 00:56:38.422810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-17 00:56:38.422814 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.422818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-17 00:56:38.422822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-17 00:56:38.422826 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:38.422829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-17 00:56:38.422844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-17 00:56:38.422849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-17 00:56:38.422853 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:38.422857 | orchestrator |
2026-03-17 00:56:38.422860 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-03-17 00:56:38.422864 | orchestrator | Tuesday 17 March 2026 00:55:23 +0000 (0:00:00.789) 0:04:40.542 *********
2026-03-17 00:56:38.422868 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:38.422872 | orchestrator |
skipping: [testbed-node-1] 2026-03-17 00:56:38.422876 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.422879 | orchestrator | 2026-03-17 00:56:38.422883 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-17 00:56:38.422887 | orchestrator | Tuesday 17 March 2026 00:55:24 +0000 (0:00:00.645) 0:04:41.188 ********* 2026-03-17 00:56:38.422890 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.422894 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.422898 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.422902 | orchestrator | 2026-03-17 00:56:38.422905 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-17 00:56:38.422910 | orchestrator | Tuesday 17 March 2026 00:55:25 +0000 (0:00:01.114) 0:04:42.302 ********* 2026-03-17 00:56:38.422913 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.422917 | orchestrator | 2026-03-17 00:56:38.422921 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-17 00:56:38.422924 | orchestrator | Tuesday 17 March 2026 00:55:26 +0000 (0:00:01.282) 0:04:43.584 ********* 2026-03-17 00:56:38.422929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-17 00:56:38.422936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 00:56:38.422943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-17 00:56:38.422947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.422963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 00:56:38.422970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.422976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.422987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 00:56:38.422994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 00:56:38.423009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-17 00:56:38.423049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 00:56:38.423055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423072 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 00:56:38.423082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-17 00:56:38.423088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-17 00:56:38.423109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 00:56:38.423136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-17 00:56:38.423146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-17 00:56:38.423153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 00:56:38.423175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-17 00:56:38.423186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-17 00:56:38.423196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 00:56:38.423214 | orchestrator | 2026-03-17 00:56:38.423225 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-17 00:56:38.423231 | orchestrator | Tuesday 17 March 2026 00:55:30 +0000 (0:00:04.114) 0:04:47.698 ********* 2026-03-17 00:56:38.423238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-17 00:56:38.423250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 00:56:38.423257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 00:56:38.423276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-17 00:56:38.423283 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-17 00:56:38.423291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423299 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 00:56:38.423303 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.423307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-17 00:56:38.423333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 00:56:38.423343 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 00:56:38.423370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-17 00:56:38.423380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-17 00:56:38.423387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-17 00:56:38.423397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 00:56:38.423415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 00:56:38.423434 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.423443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 00:56:38.423459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-17 00:56:38.423468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-17 00:56:38.423472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:56:38.423482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 00:56:38.423486 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.423490 | orchestrator | 2026-03-17 00:56:38.423494 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-17 00:56:38.423498 | orchestrator | Tuesday 17 March 2026 00:55:31 +0000 (0:00:01.012) 0:04:48.710 ********* 2026-03-17 00:56:38.423502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-17 00:56:38.423506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-17 00:56:38.423511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-17 00:56:38.423520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-17 00:56:38.423526 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.423532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})  2026-03-17 00:56:38.423536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-17 00:56:38.423540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-17 00:56:38.423544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-17 00:56:38.423548 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.423551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-17 00:56:38.423555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-17 00:56:38.423559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-17 00:56:38.423563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-17 00:56:38.423567 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.423570 | orchestrator | 2026-03-17 00:56:38.423574 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-17 00:56:38.423578 | orchestrator | Tuesday 17 March 2026 00:55:32 +0000 (0:00:00.852) 0:04:49.563 ********* 2026-03-17 00:56:38.423582 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.423589 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.423593 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.423596 | orchestrator | 2026-03-17 00:56:38.423600 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-17 00:56:38.423604 | orchestrator | Tuesday 17 March 2026 00:55:33 +0000 (0:00:00.380) 0:04:49.943 ********* 2026-03-17 00:56:38.423607 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.423611 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.423615 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.423618 | orchestrator | 2026-03-17 00:56:38.423625 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-17 00:56:38.423632 | orchestrator | Tuesday 17 March 2026 00:55:34 +0000 (0:00:01.199) 0:04:51.142 ********* 2026-03-17 00:56:38.423636 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.423640 | orchestrator | 2026-03-17 00:56:38.423643 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-17 00:56:38.423647 | orchestrator | Tuesday 17 March 2026 00:55:35 +0000 (0:00:01.581) 0:04:52.724 ********* 
2026-03-17 00:56:38.423654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:56:38.423658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:56:38.423662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:56:38.423666 | orchestrator | 2026-03-17 00:56:38.423670 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-17 00:56:38.423674 | orchestrator | Tuesday 17 March 2026 00:55:38 +0000 (0:00:02.356) 0:04:55.080 ********* 2026-03-17 00:56:38.423681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:56:38.423688 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.423695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:56:38.423699 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.423703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:56:38.423707 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.423710 | orchestrator | 2026-03-17 00:56:38.423714 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-17 00:56:38.423718 | orchestrator | Tuesday 17 March 2026 00:55:38 +0000 (0:00:00.387) 0:04:55.467 ********* 2026-03-17 00:56:38.423722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-17 00:56:38.423726 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.423729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-17 00:56:38.423733 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.423737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-17 00:56:38.423740 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.423747 | orchestrator | 2026-03-17 00:56:38.423751 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-17 00:56:38.423755 | orchestrator | Tuesday 17 March 
2026 00:55:39 +0000 (0:00:00.967) 0:04:56.435 ********* 2026-03-17 00:56:38.423758 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.423762 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.423766 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.423769 | orchestrator | 2026-03-17 00:56:38.423773 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-17 00:56:38.423777 | orchestrator | Tuesday 17 March 2026 00:55:40 +0000 (0:00:00.429) 0:04:56.865 ********* 2026-03-17 00:56:38.423780 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.423784 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.423788 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.423791 | orchestrator | 2026-03-17 00:56:38.423795 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-17 00:56:38.423799 | orchestrator | Tuesday 17 March 2026 00:55:41 +0000 (0:00:01.360) 0:04:58.225 ********* 2026-03-17 00:56:38.423806 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:56:38.423810 | orchestrator | 2026-03-17 00:56:38.423814 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-17 00:56:38.423817 | orchestrator | Tuesday 17 March 2026 00:55:43 +0000 (0:00:01.696) 0:04:59.922 ********* 2026-03-17 00:56:38.423821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.423828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.423833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.423841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.423848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': 
'30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.423855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-17 00:56:38.423859 | orchestrator | 2026-03-17 00:56:38.423862 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-17 00:56:38.423866 | orchestrator | Tuesday 17 March 2026 00:55:48 +0000 (0:00:05.455) 0:05:05.377 ********* 2026-03-17 00:56:38.423870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.423877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.423881 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.423887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.423894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.423898 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.423902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.423910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-17 00:56:38.423914 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.423918 | orchestrator | 2026-03-17 00:56:38.423922 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-17 
00:56:38.423926 | orchestrator | Tuesday 17 March 2026 00:55:49 +0000 (0:00:00.617) 0:05:05.994 ********* 2026-03-17 00:56:38.423929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-17 00:56:38.423936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-17 00:56:38.423940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-17 00:56:38.423944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-17 00:56:38.423947 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.423951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-17 00:56:38.423955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-17 00:56:38.423959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-17 00:56:38.423965 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-17 00:56:38.423969 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.423973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-17 00:56:38.423976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-17 00:56:38.423984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-17 00:56:38.423987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-17 00:56:38.423991 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.423995 | orchestrator | 2026-03-17 00:56:38.423999 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-17 00:56:38.424003 | orchestrator | Tuesday 17 March 2026 00:55:50 +0000 (0:00:01.728) 0:05:07.722 ********* 2026-03-17 00:56:38.424007 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.424010 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.424033 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.424036 | orchestrator | 2026-03-17 00:56:38.424040 
| orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-17 00:56:38.424044 | orchestrator | Tuesday 17 March 2026 00:55:52 +0000 (0:00:01.344) 0:05:09.067 ********* 2026-03-17 00:56:38.424048 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.424051 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.424055 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.424059 | orchestrator | 2026-03-17 00:56:38.424063 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-17 00:56:38.424066 | orchestrator | Tuesday 17 March 2026 00:55:54 +0000 (0:00:02.130) 0:05:11.198 ********* 2026-03-17 00:56:38.424070 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.424074 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.424078 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.424081 | orchestrator | 2026-03-17 00:56:38.424085 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-17 00:56:38.424089 | orchestrator | Tuesday 17 March 2026 00:55:54 +0000 (0:00:00.331) 0:05:11.529 ********* 2026-03-17 00:56:38.424093 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.424097 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.424100 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.424104 | orchestrator | 2026-03-17 00:56:38.424108 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-17 00:56:38.424112 | orchestrator | Tuesday 17 March 2026 00:55:55 +0000 (0:00:00.317) 0:05:11.847 ********* 2026-03-17 00:56:38.424115 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.424119 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.424123 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.424127 | orchestrator | 2026-03-17 00:56:38.424130 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-03-17 00:56:38.424134 | orchestrator | Tuesday 17 March 2026 00:55:55 +0000 (0:00:00.636) 0:05:12.484 ********* 2026-03-17 00:56:38.424138 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.424142 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.424148 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.424151 | orchestrator | 2026-03-17 00:56:38.424155 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-17 00:56:38.424159 | orchestrator | Tuesday 17 March 2026 00:55:56 +0000 (0:00:00.318) 0:05:12.803 ********* 2026-03-17 00:56:38.424162 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.424166 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.424170 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.424174 | orchestrator | 2026-03-17 00:56:38.424177 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-17 00:56:38.424181 | orchestrator | Tuesday 17 March 2026 00:55:56 +0000 (0:00:00.315) 0:05:13.118 ********* 2026-03-17 00:56:38.424189 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.424193 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.424197 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.424201 | orchestrator | 2026-03-17 00:56:38.424204 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-17 00:56:38.424208 | orchestrator | Tuesday 17 March 2026 00:55:57 +0000 (0:00:00.799) 0:05:13.918 ********* 2026-03-17 00:56:38.424212 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.424216 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.424220 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.424223 | orchestrator | 2026-03-17 00:56:38.424227 | orchestrator | 
RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-17 00:56:38.424231 | orchestrator | Tuesday 17 March 2026 00:55:57 +0000 (0:00:00.676) 0:05:14.595 ********* 2026-03-17 00:56:38.424234 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.424238 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.424242 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.424246 | orchestrator | 2026-03-17 00:56:38.424249 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-17 00:56:38.424253 | orchestrator | Tuesday 17 March 2026 00:55:58 +0000 (0:00:00.347) 0:05:14.942 ********* 2026-03-17 00:56:38.424257 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.424260 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.424264 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.424268 | orchestrator | 2026-03-17 00:56:38.424274 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-17 00:56:38.424278 | orchestrator | Tuesday 17 March 2026 00:55:59 +0000 (0:00:00.865) 0:05:15.808 ********* 2026-03-17 00:56:38.424282 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.424285 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.424289 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.424293 | orchestrator | 2026-03-17 00:56:38.424296 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-17 00:56:38.424300 | orchestrator | Tuesday 17 March 2026 00:56:00 +0000 (0:00:01.080) 0:05:16.888 ********* 2026-03-17 00:56:38.424304 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.424308 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.424311 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.424315 | orchestrator | 2026-03-17 00:56:38.424319 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] 
**************** 2026-03-17 00:56:38.424322 | orchestrator | Tuesday 17 March 2026 00:56:00 +0000 (0:00:00.799) 0:05:17.687 ********* 2026-03-17 00:56:38.424326 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.424330 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.424334 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.424337 | orchestrator | 2026-03-17 00:56:38.424341 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-17 00:56:38.424345 | orchestrator | Tuesday 17 March 2026 00:56:09 +0000 (0:00:08.833) 0:05:26.521 ********* 2026-03-17 00:56:38.424348 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.424352 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.424356 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.424360 | orchestrator | 2026-03-17 00:56:38.424363 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-17 00:56:38.424367 | orchestrator | Tuesday 17 March 2026 00:56:10 +0000 (0:00:00.911) 0:05:27.432 ********* 2026-03-17 00:56:38.424371 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.424375 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.424378 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.424382 | orchestrator | 2026-03-17 00:56:38.424386 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-17 00:56:38.424390 | orchestrator | Tuesday 17 March 2026 00:56:24 +0000 (0:00:13.915) 0:05:41.348 ********* 2026-03-17 00:56:38.424393 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.424397 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.424404 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.424408 | orchestrator | 2026-03-17 00:56:38.424412 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-17 00:56:38.424415 | 
orchestrator | Tuesday 17 March 2026 00:56:25 +0000 (0:00:00.972) 0:05:42.321 ********* 2026-03-17 00:56:38.424419 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:56:38.424423 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:56:38.424427 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:56:38.424430 | orchestrator | 2026-03-17 00:56:38.424434 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-17 00:56:38.424438 | orchestrator | Tuesday 17 March 2026 00:56:29 +0000 (0:00:04.208) 0:05:46.530 ********* 2026-03-17 00:56:38.424442 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.424445 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.424449 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.424453 | orchestrator | 2026-03-17 00:56:38.424456 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-17 00:56:38.424460 | orchestrator | Tuesday 17 March 2026 00:56:30 +0000 (0:00:00.323) 0:05:46.853 ********* 2026-03-17 00:56:38.424464 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.424520 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.424524 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.424528 | orchestrator | 2026-03-17 00:56:38.424532 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-17 00:56:38.424535 | orchestrator | Tuesday 17 March 2026 00:56:30 +0000 (0:00:00.300) 0:05:47.154 ********* 2026-03-17 00:56:38.424539 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.424543 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.424547 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.424550 | orchestrator | 2026-03-17 00:56:38.424558 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-17 00:56:38.424561 | 
orchestrator | Tuesday 17 March 2026 00:56:30 +0000 (0:00:00.522) 0:05:47.676 ********* 2026-03-17 00:56:38.424565 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.424569 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.424573 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.424576 | orchestrator | 2026-03-17 00:56:38.424580 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-17 00:56:38.424584 | orchestrator | Tuesday 17 March 2026 00:56:31 +0000 (0:00:00.296) 0:05:47.972 ********* 2026-03-17 00:56:38.424588 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.424592 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.424595 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.424599 | orchestrator | 2026-03-17 00:56:38.424603 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-17 00:56:38.424607 | orchestrator | Tuesday 17 March 2026 00:56:31 +0000 (0:00:00.291) 0:05:48.264 ********* 2026-03-17 00:56:38.424610 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:38.424614 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:38.424618 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:38.424622 | orchestrator | 2026-03-17 00:56:38.424625 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-17 00:56:38.424629 | orchestrator | Tuesday 17 March 2026 00:56:31 +0000 (0:00:00.284) 0:05:48.549 ********* 2026-03-17 00:56:38.424633 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.424637 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.424640 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.424644 | orchestrator | 2026-03-17 00:56:38.424648 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-17 00:56:38.424652 | orchestrator | 
Tuesday 17 March 2026 00:56:36 +0000 (0:00:05.032) 0:05:53.581 ********* 2026-03-17 00:56:38.424655 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:56:38.424659 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:56:38.424663 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:56:38.424667 | orchestrator | 2026-03-17 00:56:38.424685 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:56:38.424693 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-17 00:56:38.424698 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-17 00:56:38.424701 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-17 00:56:38.424705 | orchestrator | 2026-03-17 00:56:38.424709 | orchestrator | 2026-03-17 00:56:38.424713 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:56:38.424717 | orchestrator | Tuesday 17 March 2026 00:56:37 +0000 (0:00:00.786) 0:05:54.368 ********* 2026-03-17 00:56:38.424720 | orchestrator | =============================================================================== 2026-03-17 00:56:38.424724 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.92s 2026-03-17 00:56:38.424728 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.83s 2026-03-17 00:56:38.424731 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.74s 2026-03-17 00:56:38.424735 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.51s 2026-03-17 00:56:38.424739 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.46s 2026-03-17 00:56:38.424743 | orchestrator | haproxy-config : Copying over 
opensearch haproxy config ----------------- 5.21s 2026-03-17 00:56:38.424746 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.03s 2026-03-17 00:56:38.424750 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.35s 2026-03-17 00:56:38.424754 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.24s 2026-03-17 00:56:38.424757 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.21s 2026-03-17 00:56:38.424761 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.11s 2026-03-17 00:56:38.424765 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.04s 2026-03-17 00:56:38.424769 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.90s 2026-03-17 00:56:38.424772 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.84s 2026-03-17 00:56:38.424776 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.65s 2026-03-17 00:56:38.424780 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.65s 2026-03-17 00:56:38.424783 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.55s 2026-03-17 00:56:38.424787 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.48s 2026-03-17 00:56:38.424791 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.47s 2026-03-17 00:56:38.424795 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.37s 2026-03-17 00:56:38.424799 | orchestrator | 2026-03-17 00:56:38 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:56:41.438657 | orchestrator | 2026-03-17 00:56:41 | INFO  | Task 
c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:56:41.440862 | orchestrator | 2026-03-17 00:56:41 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:56:41.441840 | orchestrator | 2026-03-17 00:56:41 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:56:41.442088 | orchestrator | 2026-03-17 00:56:41 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:56:44.480149 | orchestrator | 2026-03-17 00:56:44 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:56:44.480275 | orchestrator | 2026-03-17 00:56:44 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:56:44.481081 | orchestrator | 2026-03-17 00:56:44 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:56:44.481141 | orchestrator | 2026-03-17 00:56:44 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:56:47.510836 | orchestrator | 2026-03-17 00:56:47 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:56:47.511712 | orchestrator | 2026-03-17 00:56:47 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:56:47.512507 | orchestrator | 2026-03-17 00:56:47 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:56:47.512555 | orchestrator | 2026-03-17 00:56:47 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:56:50.537588 | orchestrator | 2026-03-17 00:56:50 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:56:50.538151 | orchestrator | 2026-03-17 00:56:50 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:56:50.538784 | orchestrator | 2026-03-17 00:56:50 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:56:50.538840 | orchestrator | 2026-03-17 00:56:50 | INFO  | Wait 1 second(s) until the next 
check 2026-03-17 00:56:53.570327 | orchestrator | 2026-03-17 00:56:53 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:56:53.570428 | orchestrator | 2026-03-17 00:56:53 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:56:53.570441 | orchestrator | 2026-03-17 00:56:53 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:56:53.570451 | orchestrator | 2026-03-17 00:56:53 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:56:56.599686 | orchestrator | 2026-03-17 00:56:56 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:56:56.600968 | orchestrator | 2026-03-17 00:56:56 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:56:56.601561 | orchestrator | 2026-03-17 00:56:56 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:56:56.601597 | orchestrator | 2026-03-17 00:56:56 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:56:59.636432 | orchestrator | 2026-03-17 00:56:59 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:56:59.636720 | orchestrator | 2026-03-17 00:56:59 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:56:59.637964 | orchestrator | 2026-03-17 00:56:59 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:56:59.638049 | orchestrator | 2026-03-17 00:56:59 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:57:02.669589 | orchestrator | 2026-03-17 00:57:02 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:57:02.670861 | orchestrator | 2026-03-17 00:57:02 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:57:02.672234 | orchestrator | 2026-03-17 00:57:02 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 
00:57:02.672290 | orchestrator | 2026-03-17 00:57:02 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:57:05.718813 | orchestrator | 2026-03-17 00:57:05 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:57:05.718899 | orchestrator | 2026-03-17 00:57:05 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:57:05.718933 | orchestrator | 2026-03-17 00:57:05 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:57:05.718941 | orchestrator | 2026-03-17 00:57:05 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:57:08.749814 | orchestrator | 2026-03-17 00:57:08 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:57:08.750606 | orchestrator | 2026-03-17 00:57:08 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:57:08.753673 | orchestrator | 2026-03-17 00:57:08 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:57:08.753749 | orchestrator | 2026-03-17 00:57:08 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:57:11.791276 | orchestrator | 2026-03-17 00:57:11 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:57:11.791870 | orchestrator | 2026-03-17 00:57:11 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:57:11.793082 | orchestrator | 2026-03-17 00:57:11 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:57:11.793114 | orchestrator | 2026-03-17 00:57:11 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:57:14.827079 | orchestrator | 2026-03-17 00:57:14 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED 2026-03-17 00:57:14.828555 | orchestrator | 2026-03-17 00:57:14 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:57:14.829350 | orchestrator | 2026-03-17 00:57:14 | 
INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED
2026-03-17 00:57:14.829379 | orchestrator | 2026-03-17 00:57:14 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:57:17.866306 | orchestrator | 2026-03-17 00:57:17 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state STARTED
2026-03-17 00:57:17.866775 | orchestrator | 2026-03-17 00:57:17 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED
2026-03-17 00:57:17.867861 | orchestrator | 2026-03-17 00:57:17 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED
2026-03-17 00:57:17.867923 | orchestrator | 2026-03-17 00:57:17 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:40.139046 | orchestrator | 2026-03-17 00:58:40 | INFO  | Task c3e25b80-b85a-49e6-b59f-d96232221665 is in state SUCCESS
2026-03-17 00:58:40.139095 | orchestrator |
2026-03-17 00:58:40.140581 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-17 00:58:40.140610 | orchestrator | 2.16.14
2026-03-17 00:58:40.140614 | orchestrator |
2026-03-17 00:58:40.140618 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-03-17 00:58:40.140622 | orchestrator |
2026-03-17 00:58:40.140626 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-17 00:58:40.140630 | orchestrator | Tuesday 17 March 2026 00:48:21 +0000 (0:00:00.657) 0:00:00.657 *********
2026-03-17 00:58:40.140634 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.140638 | orchestrator |
2026-03-17 00:58:40.140642 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-17 00:58:40.140645 | orchestrator | Tuesday 17 March 2026 00:48:22 +0000 (0:00:01.388) 0:00:01.563 *********
2026-03-17 00:58:40.140649 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.140652 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.140656 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.140659 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.140662 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.140666 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.140669 | orchestrator |
2026-03-17 00:58:40.140673 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-17 00:58:40.140676 | orchestrator | Tuesday 17 March 2026 00:48:24 +0000 (0:00:01.388) 0:00:02.952 *********
2026-03-17 00:58:40.140679 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.140683 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.140686 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.140690 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.140693 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.140696 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.140700 | orchestrator |
2026-03-17 00:58:40.140703 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-17 00:58:40.140706 | orchestrator | Tuesday 17 March 2026 00:48:24 +0000 (0:00:00.711) 0:00:03.667 *********
2026-03-17 00:58:40.140732 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.140736 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.140740 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.140743 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.140746 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.140749 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.140753 | orchestrator |
2026-03-17 00:58:40.140756 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-17 00:58:40.140760 | orchestrator | Tuesday 17 March 2026 00:48:25 +0000 (0:00:01.121) 0:00:04.789 *********
2026-03-17 00:58:40.140790 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.140795 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.140798 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.140801 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.140805 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.140808 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.140811 | orchestrator |
2026-03-17 00:58:40.140815 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-17 00:58:40.140818 | orchestrator | Tuesday 17 March 2026 00:48:26 +0000 (0:00:00.646) 0:00:05.436 *********
2026-03-17 00:58:40.140822 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.140825 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.140828 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.140831 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.140835 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.140838 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.140841 | orchestrator |
2026-03-17 00:58:40.140845 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-17 00:58:40.140848 | orchestrator | Tuesday 17 March 2026 00:48:26 +0000 (0:00:00.501) 0:00:05.937 *********
2026-03-17 00:58:40.140851 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.140855 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.140858 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.140861 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.140865 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.140868 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.140871 | orchestrator |
2026-03-17 00:58:40.140897 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-17 00:58:40.140901 | orchestrator | Tuesday 17 March 2026 00:48:27 +0000 (0:00:00.564) 0:00:06.501 *********
2026-03-17 00:58:40.140905 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.140915 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.140918 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.141058 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.141066 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.141069 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.141072 | orchestrator |
2026-03-17 00:58:40.141076 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-17 00:58:40.141079 | orchestrator | Tuesday 17 March 2026 00:48:28 +0000 (0:00:00.765) 0:00:07.267 *********
2026-03-17 00:58:40.141082 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.141086 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.141089 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.141092 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.141096 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.141099 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.141102 | orchestrator |
2026-03-17 00:58:40.141106 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-17 00:58:40.141109 | orchestrator | Tuesday 17 March 2026 00:48:29 +0000 (0:00:01.202) 0:00:08.469 *********
2026-03-17 00:58:40.141113 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-17 00:58:40.141116 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-17 00:58:40.141119 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-17 00:58:40.141123 | orchestrator |
2026-03-17 00:58:40.141126 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-17 00:58:40.141129 | orchestrator | Tuesday 17 March 2026 00:48:29 +0000 (0:00:00.479) 0:00:08.949 *********
2026-03-17 00:58:40.141133 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.141136 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.141139 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.141166 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.141171 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.141174 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.141182 | orchestrator |
2026-03-17 00:58:40.141186 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-17 00:58:40.141189 | orchestrator | Tuesday 17 March 2026 00:48:31 +0000 (0:00:01.397) 0:00:10.346 *********
2026-03-17 00:58:40.141193 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-17 00:58:40.141196 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-17 00:58:40.141199 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-17 00:58:40.141203 | orchestrator |
2026-03-17 00:58:40.141206 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-17 00:58:40.141209 | orchestrator | Tuesday 17 March 2026 00:48:33 +0000 (0:00:02.493) 0:00:12.839 *********
2026-03-17 00:58:40.141213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-17 00:58:40.141216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-17 00:58:40.141220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-17 00:58:40.141223 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141226 | orchestrator |
2026-03-17 00:58:40.141230 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-17 00:58:40.141233 | orchestrator | Tuesday 17 March 2026 00:48:34 +0000 (0:00:00.702) 0:00:13.542 *********
2026-03-17 00:58:40.141237 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.141242 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.141245 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.141249 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141253 | orchestrator |
2026-03-17 00:58:40.141258 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-17 00:58:40.141263 | orchestrator | Tuesday 17 March 2026 00:48:36 +0000 (0:00:01.624) 0:00:15.166 *********
2026-03-17 00:58:40.141269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.141276 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.141286 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.141293 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141299 | orchestrator |
2026-03-17 00:58:40.141308 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-17 00:58:40.141313 | orchestrator | Tuesday 17 March 2026 00:48:36 +0000 (0:00:00.653) 0:00:15.820 *********
2026-03-17 00:58:40.141326 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-17 00:48:32.111848', 'end': '2026-03-17 00:48:32.230122', 'delta': '0:00:00.118274', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.141333 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-17 00:48:32.956588', 'end': '2026-03-17 00:48:33.066843', 'delta': '0:00:00.110255', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.141336 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-17 00:48:33.649190', 'end': '2026-03-17 00:48:33.759494', 'delta': '0:00:00.110304', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.141340 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141343 | orchestrator |
2026-03-17 00:58:40.141347 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-17 00:58:40.141350 | orchestrator | Tuesday 17 March 2026 00:48:37 +0000 (0:00:00.176) 0:00:15.996 *********
2026-03-17 00:58:40.141354 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.141386 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.141390 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.141393 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.141397 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.141400 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.141403 | orchestrator |
2026-03-17 00:58:40.141407 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-17 00:58:40.141410 | orchestrator | Tuesday 17 March 2026 00:48:39 +0000 (0:00:02.236) 0:00:18.232 *********
2026-03-17 00:58:40.141413 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-17 00:58:40.141417 | orchestrator |
2026-03-17 00:58:40.141420 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-17 00:58:40.141423 | orchestrator | Tuesday 17 March 2026 00:48:40 +0000 (0:00:00.783) 0:00:19.016 *********
2026-03-17 00:58:40.141427 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141430 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.141433 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.141437 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.141440 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.141447 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.141453 | orchestrator |
2026-03-17 00:58:40.141506 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-17 00:58:40.141540 | orchestrator | Tuesday 17 March 2026 00:48:41 +0000 (0:00:01.190) 0:00:20.207 *********
2026-03-17 00:58:40.141547 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141551 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.141554 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.141560 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.141564 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.141567 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.141570 | orchestrator |
2026-03-17 00:58:40.141574 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-17 00:58:40.141577 | orchestrator | Tuesday 17 March 2026 00:48:43 +0000 (0:00:02.386) 0:00:22.593 *********
2026-03-17 00:58:40.141580 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141584 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.141587 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.141590 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.141593 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.141597 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.141600 | orchestrator |
2026-03-17 00:58:40.141603 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-17 00:58:40.141607 | orchestrator | Tuesday 17 March 2026 00:48:44 +0000 (0:00:00.967) 0:00:23.560 *********
2026-03-17 00:58:40.141610 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141613 | orchestrator |
2026-03-17 00:58:40.141617 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-17 00:58:40.141620 | orchestrator | Tuesday 17 March 2026 00:48:44 +0000 (0:00:00.185) 0:00:23.745 *********
2026-03-17 00:58:40.141623 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141627 | orchestrator |
2026-03-17 00:58:40.141630 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-17 00:58:40.141633 | orchestrator | Tuesday 17 March 2026 00:48:45 +0000 (0:00:00.362) 0:00:24.108 *********
2026-03-17 00:58:40.141636 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141640 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.141643 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.141651 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.141654 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.141657 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.141661 | orchestrator |
2026-03-17 00:58:40.141664 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-17 00:58:40.141667 | orchestrator | Tuesday 17 March 2026 00:48:46 +0000 (0:00:00.857) 0:00:24.966 *********
2026-03-17 00:58:40.141671 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141674 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.141677 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.141681 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.141684 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.141687 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.141691 | orchestrator |
2026-03-17 00:58:40.141694 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-17 00:58:40.141697 | orchestrator | Tuesday 17 March 2026 00:48:46 +0000 (0:00:00.811) 0:00:25.778 *********
2026-03-17 00:58:40.141701 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141840 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.141845 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.141849 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.141853 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.141856 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.141860 | orchestrator |
2026-03-17 00:58:40.141864 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-17 00:58:40.141872 | orchestrator | Tuesday 17 March 2026 00:48:47 +0000 (0:00:00.649) 0:00:26.427 *********
2026-03-17 00:58:40.141876 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141880 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.141883 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.141887 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.141891 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.141895 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.141899 | orchestrator |
2026-03-17 00:58:40.141902 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-17 00:58:40.141906 | orchestrator | Tuesday 17 March 2026 00:48:48 +0000 (0:00:00.910) 0:00:27.338 *********
2026-03-17 00:58:40.141910 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.141914 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.141918 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.142000 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.142005 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.142009 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.142035 | orchestrator |
2026-03-17 00:58:40.142040 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-17 00:58:40.142044 | orchestrator | Tuesday 17 March 2026 00:48:49 +0000 (0:00:00.632) 0:00:27.970 *********
2026-03-17 00:58:40.142047 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.142051 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.142055 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.142059 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.142062 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.142067 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.142072 | orchestrator |
2026-03-17 00:58:40.142078 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-17 00:58:40.142086 | orchestrator | Tuesday 17 March 2026 00:48:49 +0000 (0:00:00.972) 0:00:28.942 *********
2026-03-17 00:58:40.142093 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.142099 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.142105 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.142111 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.142117 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.142122 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.142128 | orchestrator |
2026-03-17 00:58:40.142134 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-17 00:58:40.142141 | orchestrator | Tuesday 17 March 2026 00:48:50 +0000 (0:00:00.455) 0:00:29.398 *********
2026-03-17 00:58:40.142152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b48309d9--c226--530e--bc23--6e205cf9651b-osd--block--b48309d9--c226--530e--bc23--6e205cf9651b', 'dm-uuid-LVM-JRKlP6LIzKroJwI7cwJekUmidQP1dkkc10P6t7SNbt0Fuu0dM1f0yCQj7KuABZzu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-17 00:58:40.142158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f-osd--block--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f', 'dm-uuid-LVM-FTXPw6vvhD2ctiRDXpkTucTstUSMnhZjMX8frOXeKo9sMioVcXsDXqozTvTId0Xd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-17 00:58:40.142188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 00:58:40.142198 | orchestrator | skipping: [testbed-node-3] =>
(item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--13f697f5--12ba--5526--98d1--b1a9c265f800-osd--block--13f697f5--12ba--5526--98d1--b1a9c265f800', 'dm-uuid-LVM-ydCXoqPtK5pYOVor0N8MzRweku90f1HZVD2GP5etIYpm9MAS1EJkDslBAem20cjJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--a0cc3c10--edeb--5a7b--849a--4273befffbf6-osd--block--a0cc3c10--edeb--5a7b--849a--4273befffbf6', 'dm-uuid-LVM-9qSBwfie3LEVyt9oLHcz7QNTZZPm9GLrQmSddtKIdhKAciSgHjqYZqMg3K9caQlF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part1', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part14', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part15', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': 
{'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part16', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b48309d9--c226--530e--bc23--6e205cf9651b-osd--block--b48309d9--c226--530e--bc23--6e205cf9651b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DUgk5R-vUG2-TrLu-eqkb-PG88-nP5c-anwxd8', 'scsi-0QEMU_QEMU_HARDDISK_e46b8678-1baa-4ba8-a612-904460f97320', 'scsi-SQEMU_QEMU_HARDDISK_e46b8678-1baa-4ba8-a612-904460f97320'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f-osd--block--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JPxT8G-FQnz-R6eK-ccbB-f3TT-SWfh-BaDf8g', 'scsi-0QEMU_QEMU_HARDDISK_f95d5766-a3db-4d15-9977-785c02a190f5', 'scsi-SQEMU_QEMU_HARDDISK_f95d5766-a3db-4d15-9977-785c02a190f5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2854fd14-3e82-4dcb-865e-ef6e028a2c86', 'scsi-SQEMU_QEMU_HARDDISK_2854fd14-3e82-4dcb-865e-ef6e028a2c86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-17 00:58:40.142323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0-osd--block--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0', 'dm-uuid-LVM-zrdpKXOcNezBtRtPQoFzCeCrhDD0O4ZsOCdIwGhFUEHdJo0GU6yDutRDUzO0a7XH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bc85b6b7--69fe--55db--81a6--3a78775dfc6c-osd--block--bc85b6b7--69fe--55db--81a6--3a78775dfc6c', 'dm-uuid-LVM-ryaTqHhsmATbIQsNQD2CO8W4Nnz0nYQi2hefVaE1oS6srXboYXRExhEIzPlafiha'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part1', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part14', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part15', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': 
{'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part16', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--13f697f5--12ba--5526--98d1--b1a9c265f800-osd--block--13f697f5--12ba--5526--98d1--b1a9c265f800'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QLf3du-gcpq-ZiGI-Yp2L-1BCI-i7t9-Fa9c2U', 'scsi-0QEMU_QEMU_HARDDISK_9ec754d5-296d-4a8a-b6d8-e4830272a171', 'scsi-SQEMU_QEMU_HARDDISK_9ec754d5-296d-4a8a-b6d8-e4830272a171'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a0cc3c10--edeb--5a7b--849a--4273befffbf6-osd--block--a0cc3c10--edeb--5a7b--849a--4273befffbf6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZNW1i7-xCmL-GJs5-RydD-2txE-hRH3-ixXHNA', 'scsi-0QEMU_QEMU_HARDDISK_d8ebe49d-b73b-4490-897b-f13bdc67f86d', 'scsi-SQEMU_QEMU_HARDDISK_d8ebe49d-b73b-4490-897b-f13bdc67f86d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91ef76e-9f0f-49ef-bc09-7b70daad6579', 'scsi-SQEMU_QEMU_HARDDISK_f91ef76e-9f0f-49ef-bc09-7b70daad6579'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142521 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.142525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142548 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0-osd--block--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oHcqGJ-S8Q8-sg2L-oLvt-4xzV-a0Yy-FcYNsg', 'scsi-0QEMU_QEMU_HARDDISK_a7deaf5a-cd70-43cd-92ab-ee3441c5e54f', 'scsi-SQEMU_QEMU_HARDDISK_a7deaf5a-cd70-43cd-92ab-ee3441c5e54f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bc85b6b7--69fe--55db--81a6--3a78775dfc6c-osd--block--bc85b6b7--69fe--55db--81a6--3a78775dfc6c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9jeWCi-9DLp-UlhN-eHDh-lDvy-Uc3o-jpevWg', 'scsi-0QEMU_QEMU_HARDDISK_dd7becb9-0584-4efc-8944-d51272ed61fa', 'scsi-SQEMU_QEMU_HARDDISK_dd7becb9-0584-4efc-8944-d51272ed61fa'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142613 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a90ba68-315a-4ce4-a803-8ffceb4dacc1', 'scsi-SQEMU_QEMU_HARDDISK_0a90ba68-315a-4ce4-a803-8ffceb4dacc1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 
'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903', 'scsi-SQEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part1', 'scsi-SQEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part14', 'scsi-SQEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part15', 'scsi-SQEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part16', 'scsi-SQEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142650 | 
orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.142662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142673 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.142677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142680 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.142683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 
00:58:40.142701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213', 'scsi-SQEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part1', 'scsi-SQEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part14', 'scsi-SQEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part15', 'scsi-SQEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part16', 'scsi-SQEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142751 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142758 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.142762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 00:58:40.142767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9', 'scsi-SQEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142780 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 00:58:40.142784 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.142788 | orchestrator | 2026-03-17 00:58:40.142791 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-17 00:58:40.142795 | orchestrator | Tuesday 17 March 2026 00:48:51 +0000 (0:00:01.098) 0:00:30.497 ********* 2026-03-17 00:58:40.142798 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b48309d9--c226--530e--bc23--6e205cf9651b-osd--block--b48309d9--c226--530e--bc23--6e205cf9651b', 'dm-uuid-LVM-JRKlP6LIzKroJwI7cwJekUmidQP1dkkc10P6t7SNbt0Fuu0dM1f0yCQj7KuABZzu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142802 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f-osd--block--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f', 'dm-uuid-LVM-FTXPw6vvhD2ctiRDXpkTucTstUSMnhZjMX8frOXeKo9sMioVcXsDXqozTvTId0Xd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142808 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142814 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142823 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142827 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142836 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142840 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part1', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part14', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part15', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part16', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142868 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b48309d9--c226--530e--bc23--6e205cf9651b-osd--block--b48309d9--c226--530e--bc23--6e205cf9651b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DUgk5R-vUG2-TrLu-eqkb-PG88-nP5c-anwxd8', 'scsi-0QEMU_QEMU_HARDDISK_e46b8678-1baa-4ba8-a612-904460f97320', 'scsi-SQEMU_QEMU_HARDDISK_e46b8678-1baa-4ba8-a612-904460f97320'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142875 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f-osd--block--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JPxT8G-FQnz-R6eK-ccbB-f3TT-SWfh-BaDf8g', 'scsi-0QEMU_QEMU_HARDDISK_f95d5766-a3db-4d15-9977-785c02a190f5', 'scsi-SQEMU_QEMU_HARDDISK_f95d5766-a3db-4d15-9977-785c02a190f5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142879 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2854fd14-3e82-4dcb-865e-ef6e028a2c86', 'scsi-SQEMU_QEMU_HARDDISK_2854fd14-3e82-4dcb-865e-ef6e028a2c86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142885 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--13f697f5--12ba--5526--98d1--b1a9c265f800-osd--block--13f697f5--12ba--5526--98d1--b1a9c265f800', 'dm-uuid-LVM-ydCXoqPtK5pYOVor0N8MzRweku90f1HZVD2GP5etIYpm9MAS1EJkDslBAem20cjJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142890 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142896 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0cc3c10--edeb--5a7b--849a--4273befffbf6-osd--block--a0cc3c10--edeb--5a7b--849a--4273befffbf6', 'dm-uuid-LVM-9qSBwfie3LEVyt9oLHcz7QNTZZPm9GLrQmSddtKIdhKAciSgHjqYZqMg3K9caQlF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142899 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142905 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142909 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142914 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142918 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.142922 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142927 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142931 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142946 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142955 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part1', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part14', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part15', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part16', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-17 00:58:40.142963 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--13f697f5--12ba--5526--98d1--b1a9c265f800-osd--block--13f697f5--12ba--5526--98d1--b1a9c265f800'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QLf3du-gcpq-ZiGI-Yp2L-1BCI-i7t9-Fa9c2U', 'scsi-0QEMU_QEMU_HARDDISK_9ec754d5-296d-4a8a-b6d8-e4830272a171', 'scsi-SQEMU_QEMU_HARDDISK_9ec754d5-296d-4a8a-b6d8-e4830272a171'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142966 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0-osd--block--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0', 'dm-uuid-LVM-zrdpKXOcNezBtRtPQoFzCeCrhDD0O4ZsOCdIwGhFUEHdJo0GU6yDutRDUzO0a7XH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142972 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bc85b6b7--69fe--55db--81a6--3a78775dfc6c-osd--block--bc85b6b7--69fe--55db--81a6--3a78775dfc6c', 'dm-uuid-LVM-ryaTqHhsmATbIQsNQD2CO8W4Nnz0nYQi2hefVaE1oS6srXboYXRExhEIzPlafiha'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142978 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a0cc3c10--edeb--5a7b--849a--4273befffbf6-osd--block--a0cc3c10--edeb--5a7b--849a--4273befffbf6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZNW1i7-xCmL-GJs5-RydD-2txE-hRH3-ixXHNA', 'scsi-0QEMU_QEMU_HARDDISK_d8ebe49d-b73b-4490-897b-f13bdc67f86d', 'scsi-SQEMU_QEMU_HARDDISK_d8ebe49d-b73b-4490-897b-f13bdc67f86d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142981 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91ef76e-9f0f-49ef-bc09-7b70daad6579', 'scsi-SQEMU_QEMU_HARDDISK_f91ef76e-9f0f-49ef-bc09-7b70daad6579'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142992 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.142997 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143001 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143007 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143011 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143017 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143020 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.143024 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143027 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143033 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143037 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143043 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143049 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143052 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143056 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143063 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143069 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143078 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-17 00:58:40.143088 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0-osd--block--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oHcqGJ-S8Q8-sg2L-oLvt-4xzV-a0Yy-FcYNsg', 'scsi-0QEMU_QEMU_HARDDISK_a7deaf5a-cd70-43cd-92ab-ee3441c5e54f', 'scsi-SQEMU_QEMU_HARDDISK_a7deaf5a-cd70-43cd-92ab-ee3441c5e54f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143096 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 00:58:40.143106 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903', 'scsi-SQEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part1', 'scsi-SQEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part14', 'scsi-SQEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part15', 'scsi-SQEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part16', 'scsi-SQEMU_QEMU_HARDDISK_03bf2729-822f-4d31-8b12-53ff53864903-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-17 00:58:40.143274 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143284 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--bc85b6b7--69fe--55db--81a6--3a78775dfc6c-osd--block--bc85b6b7--69fe--55db--81a6--3a78775dfc6c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9jeWCi-9DLp-UlhN-eHDh-lDvy-Uc3o-jpevWg', 'scsi-0QEMU_QEMU_HARDDISK_dd7becb9-0584-4efc-8944-d51272ed61fa', 'scsi-SQEMU_QEMU_HARDDISK_dd7becb9-0584-4efc-8944-d51272ed61fa'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143288 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143295 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143302 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a90ba68-315a-4ce4-a803-8ffceb4dacc1', 'scsi-SQEMU_QEMU_HARDDISK_0a90ba68-315a-4ce4-a803-8ffceb4dacc1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143307 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143310 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.143315 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143323 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143331 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143345 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143351 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143357 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143363 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143371 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143377 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143384 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143394 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143399 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143405 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.143418 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213', 'scsi-SQEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part1', 'scsi-SQEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part14', 'scsi-SQEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part15', 'scsi-SQEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part16', 'scsi-SQEMU_QEMU_HARDDISK_3189c099-cba2-49c7-8cd7-9afaa3b71213-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143431 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143437 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143441 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.143448 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9', 'scsi-SQEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5a0ad70-e1a1-4fe9-af13-e0556c4f61c9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143452 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 00:58:40.143459 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.143463 | orchestrator |
2026-03-17 00:58:40.143469 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-17 00:58:40.143474 | orchestrator | Tuesday 17 March 2026 00:48:53 +0000 (0:00:01.493) 0:00:31.991 *********
2026-03-17 00:58:40.143478 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.143481 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.143485 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.143489 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.143492 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.143496 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.143500 | orchestrator |
2026-03-17 00:58:40.143503 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-17 00:58:40.143507 | orchestrator | Tuesday 17 March 2026 00:48:54 +0000 (0:00:01.253) 0:00:33.245 *********
2026-03-17 00:58:40.143511 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.143515 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.143518 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.143522 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.143525 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.143529 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.143533 | orchestrator |
2026-03-17 00:58:40.143537 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-17 00:58:40.143541 | orchestrator | Tuesday 17 March 2026 00:48:54 +0000 (0:00:00.924) 0:00:33.826 *********
2026-03-17 00:58:40.143544 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.143548 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.143552 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.143556 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.143560 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.143563 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.143567 | orchestrator |
2026-03-17 00:58:40.143571 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-17 00:58:40.143575 | orchestrator | Tuesday 17 March 2026 00:48:55 +0000 (0:00:00.546) 0:00:34.750 *********
2026-03-17 00:58:40.143578 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.143582 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.143586 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.143589 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.143766 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.143771 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.143775 | orchestrator |
2026-03-17 00:58:40.143779 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-17 00:58:40.143783 | orchestrator | Tuesday 17 March 2026 00:48:56 +0000 (0:00:00.606) 0:00:35.357 *********
2026-03-17 00:58:40.143786 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.143790 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.143794 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.143797 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.143801 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.143805 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.143808 | orchestrator |
2026-03-17 00:58:40.143812 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-17 00:58:40.143820 | orchestrator | Tuesday 17 March 2026 00:48:57 +0000 (0:00:01.128) 0:00:36.485 *********
2026-03-17 00:58:40.143823 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.143827 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.143831 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.143834 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.143838 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.143842 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.143846 | orchestrator |
2026-03-17 00:58:40.143849 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-17 00:58:40.143853 | orchestrator | Tuesday 17 March 2026 00:48:58 +0000 (0:00:00.546) 0:00:37.032 *********
2026-03-17 00:58:40.143857 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-17 00:58:40.143861 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-17 00:58:40.143865 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-17 00:58:40.143868 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-17 00:58:40.143872 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-17 00:58:40.143876 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-17 00:58:40.143880 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-17 00:58:40.143884 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-17 00:58:40.143890 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-17 00:58:40.143893 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-17 00:58:40.143898 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-17 00:58:40.143901 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-17 00:58:40.143905 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-17 00:58:40.143909 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-17 00:58:40.143912 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-17 00:58:40.143916 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-17 00:58:40.143952 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-17 00:58:40.143958 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-17 00:58:40.143962 | orchestrator |
2026-03-17 00:58:40.143965 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-17 00:58:40.143969 | orchestrator | Tuesday 17 March 2026 00:49:00 +0000 (0:00:02.826) 0:00:39.859 *********
2026-03-17 00:58:40.143973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-17 00:58:40.143977 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-17 00:58:40.143981 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-17 00:58:40.143985 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-17 00:58:40.143989 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-17 00:58:40.143993 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-17 00:58:40.143996 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.144000 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-17 00:58:40.144013 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-17 00:58:40.144018 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-17 00:58:40.144021 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.144025 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-17 00:58:40.144029 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-17 00:58:40.144033 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-17 00:58:40.144037 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.144040 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-17 00:58:40.144044 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.144047 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-17 00:58:40.144054 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-17 00:58:40.144058 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-17 00:58:40.144062 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.144066 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-17 00:58:40.144069 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-17 00:58:40.144073 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.144077 | orchestrator |
2026-03-17 00:58:40.144081 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-17 00:58:40.144084 | orchestrator | Tuesday 17 March 2026 00:49:01 +0000 (0:00:00.591) 0:00:40.451 *********
2026-03-17 00:58:40.144088 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.144092 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.144096 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.144100 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:58:40.144104 | orchestrator |
2026-03-17 00:58:40.144108 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-17 00:58:40.144111 | orchestrator | Tuesday 17 March 2026 00:49:02 +0000 (0:00:00.840) 0:00:41.291 *********
2026-03-17 00:58:40.144115 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.144141 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.144146 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.144150 | orchestrator |
2026-03-17 00:58:40.144172 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-17 00:58:40.144176 | orchestrator | Tuesday 17 March 2026 00:49:02 +0000 (0:00:00.377) 0:00:41.669 *********
2026-03-17 00:58:40.144180 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.144184 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.144280 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.144283 | orchestrator |
2026-03-17 00:58:40.144287 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-17 00:58:40.144291 | orchestrator | Tuesday 17 March 2026 00:49:03 +0000 (0:00:00.281) 0:00:41.951 *********
2026-03-17 00:58:40.144295 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.144299 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.144302 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.144307 | orchestrator |
2026-03-17 00:58:40.144327 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-17 00:58:40.144333 | orchestrator | Tuesday 17 March 2026 00:49:03 +0000 (0:00:00.442) 0:00:42.393 *********
2026-03-17 00:58:40.144339 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.144344 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.144349 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.144354 | orchestrator |
2026-03-17 00:58:40.144360 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-17 00:58:40.144400 | orchestrator | Tuesday 17 March 2026 00:49:04 +0000 (0:00:00.649) 0:00:43.042 *********
2026-03-17 00:58:40.144406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:58:40.144412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 00:58:40.144418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 00:58:40.144422 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.144425 | orchestrator |
2026-03-17 00:58:40.144429 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-17 00:58:40.144436 | orchestrator | Tuesday 17 March 2026 00:49:04 +0000 (0:00:00.403) 0:00:43.446 *********
2026-03-17 00:58:40.144440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:58:40.144444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 00:58:40.144447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 00:58:40.144455 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.144459 | orchestrator |
2026-03-17 00:58:40.144463 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-17 00:58:40.144466 | orchestrator | Tuesday 17 March 2026 00:49:04 +0000 (0:00:00.363) 0:00:43.810 *********
2026-03-17 00:58:40.144470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:58:40.144474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 00:58:40.144478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 00:58:40.144481 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.144485 | orchestrator |
2026-03-17 00:58:40.144489 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-17 00:58:40.144493 | orchestrator | Tuesday 17 March 2026 00:49:05 +0000 (0:00:00.439) 0:00:44.249 *********
2026-03-17 00:58:40.144496 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.144500 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.144504 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.144507 | orchestrator |
2026-03-17 00:58:40.144511 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-17 00:58:40.144514 | orchestrator | Tuesday 17 March 2026 00:49:05 +0000 (0:00:00.359) 0:00:44.608 *********
2026-03-17 00:58:40.144518 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-17 00:58:40.144522 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-17 00:58:40.144536 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-17 00:58:40.144540 | orchestrator |
2026-03-17 00:58:40.144544 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-17 00:58:40.144547 | orchestrator | Tuesday 17 March 2026 00:49:06 +0000 (0:00:01.241) 0:00:45.850 *********
2026-03-17 00:58:40.144551 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-17 00:58:40.144555 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-17 00:58:40.144559 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-17 00:58:40.144563 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:58:40.144566 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-17 00:58:40.144570 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-17 00:58:40.144574 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-17 00:58:40.144577 | orchestrator |
2026-03-17 00:58:40.144581 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-17 00:58:40.144585 | orchestrator | Tuesday 17 March 2026 00:49:07 +0000 (0:00:00.758) 0:00:46.608 *********
2026-03-17 00:58:40.144589 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-17 00:58:40.144592 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-17 00:58:40.144596 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-17 00:58:40.144600 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:58:40.144604 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-17 00:58:40.144607 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-17 00:58:40.144611 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-17 00:58:40.144615 | orchestrator |
2026-03-17 00:58:40.144618 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-17 00:58:40.144622 | orchestrator | Tuesday 17 March 2026 00:49:09 +0000 (0:00:01.855) 0:00:48.464 *********
2026-03-17 00:58:40.144626 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.144645 | orchestrator |
2026-03-17 00:58:40.144650 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-17 00:58:40.144653 | orchestrator | Tuesday 17 March 2026 00:49:10 +0000 (0:00:01.165) 0:00:49.630 *********
2026-03-17 00:58:40.144657 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.144682 | orchestrator |
2026-03-17 00:58:40.144687 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-17 00:58:40.144691 | orchestrator | Tuesday 17 March 2026 00:49:11 +0000 (0:00:01.083) 0:00:50.713 *********
2026-03-17 00:58:40.144695 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.144698 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.144702 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.144706 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.144862 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.144867 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.144871 | orchestrator |
2026-03-17 00:58:40.144875 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-17 00:58:40.144879 | orchestrator | Tuesday 17 March 2026 00:49:12 +0000 (0:00:01.187) 0:00:51.901 *********
2026-03-17 00:58:40.144882 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.144886 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.144893 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.144897 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.144901 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.144905 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.144909 | orchestrator |
2026-03-17 00:58:40.144912 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-17 00:58:40.144916 | orchestrator | Tuesday 17 March 2026 00:49:13 +0000 (0:00:00.794) 0:00:52.696 *********
2026-03-17 00:58:40.144920 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.144924 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.144928 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.144943 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.144950 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.144954 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.144958 | orchestrator |
2026-03-17 00:58:40.144961 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-17 00:58:40.144965 | orchestrator | Tuesday 17 March 2026 00:49:14 +0000 (0:00:00.732) 0:00:53.428 *********
2026-03-17 00:58:40.144969 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.144973 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.144976 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.144980 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.144984 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.144987 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.144991 | orchestrator |
2026-03-17 00:58:40.144995 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-17 00:58:40.145018 | orchestrator | Tuesday 17 March 2026 00:49:15 +0000 (0:00:00.645) 0:00:54.074 *********
2026-03-17 00:58:40.145022 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.145026 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.145030 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.145034 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.145037 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.145050 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.145055 | orchestrator |
2026-03-17 00:58:40.145059 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container]
************************* 2026-03-17 00:58:40.145062 | orchestrator | Tuesday 17 March 2026 00:49:16 +0000 (0:00:01.147) 0:00:55.222 ********* 2026-03-17 00:58:40.145066 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.145070 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.145074 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.145100 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.145105 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.145108 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.145112 | orchestrator | 2026-03-17 00:58:40.145116 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 00:58:40.145120 | orchestrator | Tuesday 17 March 2026 00:49:16 +0000 (0:00:00.551) 0:00:55.774 ********* 2026-03-17 00:58:40.145124 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.145127 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.145131 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.145135 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.145232 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.145235 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.145239 | orchestrator | 2026-03-17 00:58:40.145242 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 00:58:40.145246 | orchestrator | Tuesday 17 March 2026 00:49:17 +0000 (0:00:00.680) 0:00:56.455 ********* 2026-03-17 00:58:40.145249 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.145252 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.145256 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.145259 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.145263 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.145266 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.145269 | orchestrator 
| 2026-03-17 00:58:40.145273 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-17 00:58:40.145276 | orchestrator | Tuesday 17 March 2026 00:49:18 +0000 (0:00:01.356) 0:00:57.812 ********* 2026-03-17 00:58:40.145280 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.145283 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.145286 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.145290 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.145293 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.145297 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.145300 | orchestrator | 2026-03-17 00:58:40.145304 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-17 00:58:40.145307 | orchestrator | Tuesday 17 March 2026 00:49:20 +0000 (0:00:01.679) 0:00:59.491 ********* 2026-03-17 00:58:40.145311 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.145314 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.145317 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.145321 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.145324 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.145328 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.145331 | orchestrator | 2026-03-17 00:58:40.145334 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-17 00:58:40.145338 | orchestrator | Tuesday 17 March 2026 00:49:21 +0000 (0:00:00.821) 0:01:00.313 ********* 2026-03-17 00:58:40.145341 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.145344 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.145348 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.145351 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.145354 | orchestrator | ok: [testbed-node-1] 2026-03-17 
00:58:40.145358 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.145362 | orchestrator | 2026-03-17 00:58:40.145367 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-17 00:58:40.145372 | orchestrator | Tuesday 17 March 2026 00:49:22 +0000 (0:00:01.036) 0:01:01.349 ********* 2026-03-17 00:58:40.145378 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.145383 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.145389 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.145394 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.145400 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.145404 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.145407 | orchestrator | 2026-03-17 00:58:40.145415 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-17 00:58:40.145418 | orchestrator | Tuesday 17 March 2026 00:49:23 +0000 (0:00:00.903) 0:01:02.252 ********* 2026-03-17 00:58:40.145421 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.145425 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.145430 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.145434 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.145437 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.145440 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.145444 | orchestrator | 2026-03-17 00:58:40.145447 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-17 00:58:40.145450 | orchestrator | Tuesday 17 March 2026 00:49:24 +0000 (0:00:01.069) 0:01:03.322 ********* 2026-03-17 00:58:40.145454 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.145457 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.145460 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.145464 | orchestrator | skipping: [testbed-node-0] 
2026-03-17 00:58:40.145467 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.145470 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.145474 | orchestrator | 2026-03-17 00:58:40.145478 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-17 00:58:40.145484 | orchestrator | Tuesday 17 March 2026 00:49:25 +0000 (0:00:00.693) 0:01:04.016 ********* 2026-03-17 00:58:40.145488 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.145492 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.145495 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.145499 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.145504 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.145509 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.145513 | orchestrator | 2026-03-17 00:58:40.145519 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-17 00:58:40.145525 | orchestrator | Tuesday 17 March 2026 00:49:25 +0000 (0:00:00.795) 0:01:04.811 ********* 2026-03-17 00:58:40.145530 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.145536 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.145541 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.145544 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.145562 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.145566 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.145569 | orchestrator | 2026-03-17 00:58:40.145572 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-17 00:58:40.145576 | orchestrator | Tuesday 17 March 2026 00:49:26 +0000 (0:00:00.507) 0:01:05.318 ********* 2026-03-17 00:58:40.145579 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.145582 | orchestrator | skipping: [testbed-node-4] 
2026-03-17 00:58:40.145586 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.145589 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.145593 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.145596 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.145599 | orchestrator | 2026-03-17 00:58:40.145603 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-17 00:58:40.145606 | orchestrator | Tuesday 17 March 2026 00:49:26 +0000 (0:00:00.549) 0:01:05.868 ********* 2026-03-17 00:58:40.145610 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.145613 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.145616 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.145620 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.145623 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.145626 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.145630 | orchestrator | 2026-03-17 00:58:40.145633 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-17 00:58:40.145637 | orchestrator | Tuesday 17 March 2026 00:49:27 +0000 (0:00:00.839) 0:01:06.707 ********* 2026-03-17 00:58:40.145640 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.145647 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.145650 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.145653 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.145657 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.145660 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.145664 | orchestrator | 2026-03-17 00:58:40.145667 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-17 00:58:40.145670 | orchestrator | Tuesday 17 March 2026 00:49:28 +0000 (0:00:01.061) 0:01:07.769 ********* 2026-03-17 00:58:40.145674 | orchestrator | changed: [testbed-node-3] 2026-03-17 
00:58:40.145677 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.145681 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:40.145684 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.145687 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:40.145690 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:40.145694 | orchestrator | 2026-03-17 00:58:40.145697 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-17 00:58:40.145700 | orchestrator | Tuesday 17 March 2026 00:49:30 +0000 (0:00:01.703) 0:01:09.472 ********* 2026-03-17 00:58:40.145704 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:40.145751 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.145756 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:40.145759 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.145763 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:40.145766 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.145770 | orchestrator | 2026-03-17 00:58:40.145774 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-17 00:58:40.145777 | orchestrator | Tuesday 17 March 2026 00:49:34 +0000 (0:00:03.747) 0:01:13.219 ********* 2026-03-17 00:58:40.145781 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:58:40.145785 | orchestrator | 2026-03-17 00:58:40.145788 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-17 00:58:40.145792 | orchestrator | Tuesday 17 March 2026 00:49:35 +0000 (0:00:01.430) 0:01:14.650 ********* 2026-03-17 00:58:40.145795 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.145799 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.145802 
| orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.145806 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.145809 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.145812 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.145816 | orchestrator | 2026-03-17 00:58:40.145819 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-17 00:58:40.145824 | orchestrator | Tuesday 17 March 2026 00:49:36 +0000 (0:00:00.551) 0:01:15.201 ********* 2026-03-17 00:58:40.145830 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.145842 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.145847 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.145853 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.145859 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.145865 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.145871 | orchestrator | 2026-03-17 00:58:40.145876 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-17 00:58:40.145883 | orchestrator | Tuesday 17 March 2026 00:49:37 +0000 (0:00:01.060) 0:01:16.262 ********* 2026-03-17 00:58:40.145889 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 00:58:40.145895 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 00:58:40.145902 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 00:58:40.145908 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 00:58:40.145914 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 00:58:40.145925 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 00:58:40.145941 | orchestrator | ok: 
[testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 00:58:40.145948 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 00:58:40.145954 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 00:58:40.146036 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 00:58:40.146070 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 00:58:40.146079 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 00:58:40.146085 | orchestrator | 2026-03-17 00:58:40.146092 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-17 00:58:40.146098 | orchestrator | Tuesday 17 March 2026 00:49:39 +0000 (0:00:01.936) 0:01:18.199 ********* 2026-03-17 00:58:40.146107 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.146114 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.146120 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.146127 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:40.146134 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:40.146140 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:40.146146 | orchestrator | 2026-03-17 00:58:40.146152 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-17 00:58:40.146158 | orchestrator | Tuesday 17 March 2026 00:49:40 +0000 (0:00:01.293) 0:01:19.493 ********* 2026-03-17 00:58:40.146164 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.146169 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.146174 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.146178 | orchestrator | skipping: [testbed-node-0] 
2026-03-17 00:58:40.146182 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.146186 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.146190 | orchestrator | 2026-03-17 00:58:40.146194 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-17 00:58:40.146198 | orchestrator | Tuesday 17 March 2026 00:49:41 +0000 (0:00:01.024) 0:01:20.517 ********* 2026-03-17 00:58:40.146202 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.146206 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.146210 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.146214 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.146218 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.146222 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.146225 | orchestrator | 2026-03-17 00:58:40.146229 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-17 00:58:40.146233 | orchestrator | Tuesday 17 March 2026 00:49:42 +0000 (0:00:00.856) 0:01:21.374 ********* 2026-03-17 00:58:40.146237 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.146240 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.146244 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.146248 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.146252 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.146255 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.146259 | orchestrator | 2026-03-17 00:58:40.146263 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-17 00:58:40.146267 | orchestrator | Tuesday 17 March 2026 00:49:42 +0000 (0:00:00.539) 0:01:21.913 ********* 2026-03-17 00:58:40.146271 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:58:40.146276 | orchestrator | 2026-03-17 00:58:40.146279 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-17 00:58:40.146290 | orchestrator | Tuesday 17 March 2026 00:49:44 +0000 (0:00:01.058) 0:01:22.972 ********* 2026-03-17 00:58:40.146293 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.146300 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.146305 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.146312 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.146320 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.146327 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.146333 | orchestrator | 2026-03-17 00:58:40.146339 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-17 00:58:40.146345 | orchestrator | Tuesday 17 March 2026 00:50:45 +0000 (0:01:01.279) 0:02:24.251 ********* 2026-03-17 00:58:40.146351 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 00:58:40.146357 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 00:58:40.146364 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 00:58:40.146374 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.146380 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 00:58:40.146386 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 00:58:40.146391 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 00:58:40.146431 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.146438 | orchestrator | skipping: [testbed-node-5] => 
(item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 00:58:40.146442 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 00:58:40.146447 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 00:58:40.146452 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.146456 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 00:58:40.146462 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 00:58:40.146484 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 00:58:40.146493 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.146499 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 00:58:40.146504 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 00:58:40.146509 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 00:58:40.146516 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.146548 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 00:58:40.146555 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 00:58:40.146561 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 00:58:40.146566 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.146571 | orchestrator | 2026-03-17 00:58:40.146577 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-17 00:58:40.146583 | orchestrator | Tuesday 17 March 2026 00:50:45 +0000 (0:00:00.633) 0:02:24.884 ********* 2026-03-17 00:58:40.146589 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.146594 | orchestrator | 
skipping: [testbed-node-4] 2026-03-17 00:58:40.146599 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.146606 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.146620 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.146627 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.146631 | orchestrator | 2026-03-17 00:58:40.146634 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-17 00:58:40.146638 | orchestrator | Tuesday 17 March 2026 00:50:46 +0000 (0:00:00.657) 0:02:25.542 ********* 2026-03-17 00:58:40.146641 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.146650 | orchestrator | 2026-03-17 00:58:40.146653 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-17 00:58:40.146657 | orchestrator | Tuesday 17 March 2026 00:50:46 +0000 (0:00:00.122) 0:02:25.665 ********* 2026-03-17 00:58:40.146660 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.146663 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.146667 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.146670 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.146676 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.146681 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.146686 | orchestrator | 2026-03-17 00:58:40.146692 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-17 00:58:40.146698 | orchestrator | Tuesday 17 March 2026 00:50:47 +0000 (0:00:00.596) 0:02:26.261 ********* 2026-03-17 00:58:40.146704 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.146709 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.146712 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.146715 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.146719 | orchestrator | 
skipping: [testbed-node-1] 2026-03-17 00:58:40.146722 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.146725 | orchestrator | 2026-03-17 00:58:40.146729 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-17 00:58:40.146732 | orchestrator | Tuesday 17 March 2026 00:50:47 +0000 (0:00:00.612) 0:02:26.874 ********* 2026-03-17 00:58:40.146735 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.146739 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.146742 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.146745 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.146748 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.146752 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.146755 | orchestrator | 2026-03-17 00:58:40.146758 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-17 00:58:40.146762 | orchestrator | Tuesday 17 March 2026 00:50:48 +0000 (0:00:00.452) 0:02:27.326 ********* 2026-03-17 00:58:40.146765 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.146768 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.146772 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.146775 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.146778 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.146782 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.146785 | orchestrator | 2026-03-17 00:58:40.146788 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-17 00:58:40.146792 | orchestrator | Tuesday 17 March 2026 00:50:51 +0000 (0:00:03.053) 0:02:30.379 ********* 2026-03-17 00:58:40.146795 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.146798 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.146801 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.146805 | 
orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.146815 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.146819 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.146827 | orchestrator |
2026-03-17 00:58:40.146834 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-17 00:58:40.146838 | orchestrator | Tuesday 17 March 2026 00:50:52 +0000 (0:00:00.578) 0:02:30.958 *********
2026-03-17 00:58:40.146845 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.146850 | orchestrator |
2026-03-17 00:58:40.146853 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-17 00:58:40.146856 | orchestrator | Tuesday 17 March 2026 00:50:53 +0000 (0:00:01.071) 0:02:32.030 *********
2026-03-17 00:58:40.146860 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.146863 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.146866 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.146872 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.146876 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.146879 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.146882 | orchestrator |
2026-03-17 00:58:40.146886 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-17 00:58:40.146889 | orchestrator | Tuesday 17 March 2026 00:50:53 +0000 (0:00:00.627) 0:02:32.657 *********
2026-03-17 00:58:40.146892 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.146909 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.146915 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.146920 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.146925 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.146957 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.146964 | orchestrator |
2026-03-17 00:58:40.146969 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-17 00:58:40.146975 | orchestrator | Tuesday 17 March 2026 00:50:54 +0000 (0:00:00.573) 0:02:33.231 *********
2026-03-17 00:58:40.146980 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.146985 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.147016 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.147038 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.147042 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.147045 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.147048 | orchestrator |
2026-03-17 00:58:40.147052 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-17 00:58:40.147055 | orchestrator | Tuesday 17 March 2026 00:50:54 +0000 (0:00:00.713) 0:02:33.945 *********
2026-03-17 00:58:40.147071 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.147074 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.147078 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.147081 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.147085 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.147088 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.147091 | orchestrator |
2026-03-17 00:58:40.147095 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-17 00:58:40.147098 | orchestrator | Tuesday 17 March 2026 00:50:55 +0000 (0:00:00.618) 0:02:34.563 *********
2026-03-17 00:58:40.147101 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.147114 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.147118 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.147121 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.147125 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.147128 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.147132 | orchestrator |
2026-03-17 00:58:40.147135 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-17 00:58:40.147138 | orchestrator | Tuesday 17 March 2026 00:50:56 +0000 (0:00:00.676) 0:02:35.240 *********
2026-03-17 00:58:40.147142 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.147147 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.147153 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.147158 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.147163 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.147168 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.147174 | orchestrator |
2026-03-17 00:58:40.147179 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-17 00:58:40.147185 | orchestrator | Tuesday 17 March 2026 00:50:56 +0000 (0:00:00.528) 0:02:35.769 *********
2026-03-17 00:58:40.147192 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.147198 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.147204 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.147210 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.147216 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.147226 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.147229 | orchestrator |
2026-03-17 00:58:40.147232 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-17 00:58:40.147237 | orchestrator | Tuesday 17 March 2026 00:50:57 +0000 (0:00:00.819) 0:02:36.588 *********
2026-03-17 00:58:40.147242 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.147248 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.147253 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.147259 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.147264 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.147269 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.147275 | orchestrator |
2026-03-17 00:58:40.147281 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-17 00:58:40.147295 | orchestrator | Tuesday 17 March 2026 00:50:58 +0000 (0:00:00.728) 0:02:37.317 *********
2026-03-17 00:58:40.147299 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.147303 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.147306 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.147310 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.147313 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.147317 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.147320 | orchestrator |
2026-03-17 00:58:40.147337 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-17 00:58:40.147341 | orchestrator | Tuesday 17 March 2026 00:50:59 +0000 (0:00:01.186) 0:02:38.504 *********
2026-03-17 00:58:40.147345 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.147349 | orchestrator |
2026-03-17 00:58:40.147358 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-17 00:58:40.147365 | orchestrator | Tuesday 17 March 2026 00:51:00 +0000 (0:00:01.161) 0:02:39.665 *********
2026-03-17 00:58:40.147368 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-17 00:58:40.147372 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-17 00:58:40.147375 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-17 00:58:40.147378 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-17 00:58:40.147394 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-17 00:58:40.147398 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-17 00:58:40.147401 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-17 00:58:40.147405 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-17 00:58:40.147410 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-17 00:58:40.147416 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-17 00:58:40.147422 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-17 00:58:40.147428 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-17 00:58:40.147435 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-17 00:58:40.147442 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-17 00:58:40.147455 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-17 00:58:40.147459 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-17 00:58:40.147463 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-17 00:58:40.147466 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-17 00:58:40.147492 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-17 00:58:40.147499 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-17 00:58:40.147505 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-17 00:58:40.147511 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-17 00:58:40.147517 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-17 00:58:40.147528 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-17 00:58:40.147534 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-17 00:58:40.147537 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-17 00:58:40.147541 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-17 00:58:40.147544 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-17 00:58:40.147547 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-17 00:58:40.147551 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-17 00:58:40.147554 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-17 00:58:40.147557 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-17 00:58:40.147560 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-17 00:58:40.147564 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-17 00:58:40.147567 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-17 00:58:40.147570 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-17 00:58:40.147574 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-17 00:58:40.147577 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-17 00:58:40.147580 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-17 00:58:40.147584 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-17 00:58:40.147587 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-17 00:58:40.147590 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-17 00:58:40.147593 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-17 00:58:40.147602 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-17 00:58:40.147606 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 00:58:40.147609 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-17 00:58:40.147613 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 00:58:40.147616 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-17 00:58:40.147619 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-17 00:58:40.147622 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 00:58:40.147626 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 00:58:40.147629 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-17 00:58:40.147632 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 00:58:40.147635 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 00:58:40.147639 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 00:58:40.147658 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 00:58:40.147663 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 00:58:40.147666 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 00:58:40.147670 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 00:58:40.147673 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 00:58:40.147679 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 00:58:40.147683 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 00:58:40.147686 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 00:58:40.147689 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 00:58:40.147695 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 00:58:40.147699 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 00:58:40.147702 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 00:58:40.147705 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 00:58:40.147709 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 00:58:40.147712 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 00:58:40.147715 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 00:58:40.147719 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 00:58:40.147722 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 00:58:40.147725 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 00:58:40.147729 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 00:58:40.147732 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 00:58:40.147751 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 00:58:40.147755 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 00:58:40.147758 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 00:58:40.147762 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-17 00:58:40.147765 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 00:58:40.147768 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-17 00:58:40.147772 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 00:58:40.147775 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 00:58:40.147778 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 00:58:40.147782 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-17 00:58:40.147785 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-17 00:58:40.147788 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-17 00:58:40.147792 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-17 00:58:40.147795 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-17 00:58:40.147799 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 00:58:40.147805 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-17 00:58:40.147812 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-17 00:58:40.147820 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-17 00:58:40.147826 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-17 00:58:40.147831 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-17 00:58:40.147836 | orchestrator |
2026-03-17 00:58:40.147841 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-17 00:58:40.147846 | orchestrator | Tuesday 17 March 2026 00:51:08 +0000 (0:00:07.484) 0:02:47.150 *********
2026-03-17 00:58:40.147852 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.147857 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.147863 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.147876 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:58:40.147882 | orchestrator |
2026-03-17 00:58:40.147888 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-17 00:58:40.147893 | orchestrator | Tuesday 17 March 2026 00:51:08 +0000 (0:00:00.787) 0:02:47.937 *********
2026-03-17 00:58:40.147908 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-17 00:58:40.147918 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-17 00:58:40.147924 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-17 00:58:40.147972 | orchestrator |
2026-03-17 00:58:40.147979 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-17 00:58:40.147984 | orchestrator | Tuesday 17 March 2026 00:51:09 +0000 (0:00:00.753) 0:02:48.690 *********
2026-03-17 00:58:40.147990 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-17 00:58:40.147996 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-17 00:58:40.148005 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-17 00:58:40.148011 | orchestrator |
2026-03-17 00:58:40.148017 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-17 00:58:40.148024 | orchestrator | Tuesday 17 March 2026 00:51:11 +0000 (0:00:01.371) 0:02:50.062 *********
2026-03-17 00:58:40.148029 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.148035 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.148041 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.148046 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148053 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148058 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148064 | orchestrator |
2026-03-17 00:58:40.148070 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-17 00:58:40.148075 | orchestrator | Tuesday 17 March 2026 00:51:11 +0000 (0:00:00.533) 0:02:50.595 *********
2026-03-17 00:58:40.148080 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.148086 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.148092 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.148098 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148110 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148116 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148122 | orchestrator |
2026-03-17 00:58:40.148132 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-17 00:58:40.148137 | orchestrator | Tuesday 17 March 2026 00:51:12 +0000 (0:00:01.258) 0:02:51.854 *********
2026-03-17 00:58:40.148142 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148148 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.148153 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.148158 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148164 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148170 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148175 | orchestrator |
2026-03-17 00:58:40.148207 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-17 00:58:40.148213 | orchestrator | Tuesday 17 March 2026 00:51:13 +0000 (0:00:00.866) 0:02:52.721 *********
2026-03-17 00:58:40.148218 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148224 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.148228 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.148233 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148239 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148244 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148249 | orchestrator |
2026-03-17 00:58:40.148254 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-17 00:58:40.148259 | orchestrator | Tuesday 17 March 2026 00:51:14 +0000 (0:00:01.038) 0:02:53.759 *********
2026-03-17 00:58:40.148264 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148275 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.148281 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.148286 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148290 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148296 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148301 | orchestrator |
2026-03-17 00:58:40.148306 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-17 00:58:40.148312 | orchestrator | Tuesday 17 March 2026 00:51:15 +0000 (0:00:00.584) 0:02:54.344 *********
2026-03-17 00:58:40.148318 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.148324 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148329 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.148333 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148339 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148345 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148350 | orchestrator |
2026-03-17 00:58:40.148355 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-17 00:58:40.148360 | orchestrator | Tuesday 17 March 2026 00:51:16 +0000 (0:00:00.717) 0:02:55.061 *********
2026-03-17 00:58:40.148366 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148371 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.148376 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.148382 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148387 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148393 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148397 | orchestrator |
2026-03-17 00:58:40.148400 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-17 00:58:40.148403 | orchestrator | Tuesday 17 March 2026 00:51:16 +0000 (0:00:00.561) 0:02:55.622 *********
2026-03-17 00:58:40.148407 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148410 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.148413 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.148416 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148420 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148423 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148426 | orchestrator |
2026-03-17 00:58:40.148430 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-17 00:58:40.148433 | orchestrator | Tuesday 17 March 2026 00:51:17 +0000 (0:00:00.601) 0:02:56.224 *********
2026-03-17 00:58:40.148436 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148439 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148442 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148445 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.148449 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.148452 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.148455 | orchestrator |
2026-03-17 00:58:40.148458 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-17 00:58:40.148461 | orchestrator | Tuesday 17 March 2026 00:51:20 +0000 (0:00:03.119) 0:02:59.344 *********
2026-03-17 00:58:40.148464 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.148468 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.148471 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.148474 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148477 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148480 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148483 | orchestrator |
2026-03-17 00:58:40.148487 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-17 00:58:40.148497 | orchestrator | Tuesday 17 March 2026 00:51:21 +0000 (0:00:00.897) 0:03:00.241 *********
2026-03-17 00:58:40.148500 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.148503 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.148506 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.148513 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148517 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148520 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148523 | orchestrator |
2026-03-17 00:58:40.148526 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-17 00:58:40.148529 | orchestrator | Tuesday 17 March 2026 00:51:22 +0000 (0:00:00.745) 0:03:00.987 *********
2026-03-17 00:58:40.148533 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148536 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.148539 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.148542 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148545 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148548 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148551 | orchestrator |
2026-03-17 00:58:40.148555 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-17 00:58:40.148558 | orchestrator | Tuesday 17 March 2026 00:51:22 +0000 (0:00:00.632) 0:03:01.619 *********
2026-03-17 00:58:40.148561 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-17 00:58:40.148565 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-17 00:58:40.148568 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-17 00:58:40.148571 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148594 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148598 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148601 | orchestrator |
2026-03-17 00:58:40.148604 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-17 00:58:40.148608 | orchestrator | Tuesday 17 March 2026 00:51:23 +0000 (0:00:00.759) 0:03:02.378 *********
2026-03-17 00:58:40.148612 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-17 00:58:40.148617 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-17 00:58:40.148620 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-17 00:58:40.148624 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-17 00:58:40.148627 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148630 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-17 00:58:40.148633 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.148637 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-17 00:58:40.148642 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.148646 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148649 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148652 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148655 | orchestrator |
2026-03-17 00:58:40.148659 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-17 00:58:40.148662 | orchestrator | Tuesday 17 March 2026 00:51:24 +0000 (0:00:00.796) 0:03:03.175 *********
2026-03-17 00:58:40.148665 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148668 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.148671 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.148674 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148678 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148681 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148684 | orchestrator |
2026-03-17 00:58:40.148689 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-17 00:58:40.148697 | orchestrator | Tuesday 17 March 2026 00:51:24 +0000 (0:00:00.559) 0:03:03.734 *********
2026-03-17 00:58:40.148701 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148704 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.148707 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.148710 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148713 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148716 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148719 | orchestrator |
2026-03-17 00:58:40.148723 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-17 00:58:40.148726 | orchestrator | Tuesday 17 March 2026 00:51:25 +0000 (0:00:00.701) 0:03:04.436 *********
2026-03-17 00:58:40.148729 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148732 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.148735 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.148739 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148742 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148745 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148748 | orchestrator |
2026-03-17 00:58:40.148751 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-17 00:58:40.148754 | orchestrator | Tuesday 17 March 2026 00:51:25 +0000 (0:00:00.504) 0:03:04.941 *********
2026-03-17 00:58:40.148757 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148760 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.148764 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.148767 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148770 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148773 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148776 | orchestrator |
2026-03-17 00:58:40.148779 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-17 00:58:40.148793 | orchestrator | Tuesday 17 March 2026 00:51:26 +0000 (0:00:00.585) 0:03:05.526 *********
2026-03-17 00:58:40.148797 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148800 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.148804 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.148807 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148810 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148813 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148816 | orchestrator |
2026-03-17 00:58:40.148819 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-17 00:58:40.148822 | orchestrator | Tuesday 17 March 2026 00:51:27 +0000 (0:00:00.699) 0:03:06.123 *********
2026-03-17 00:58:40.148826 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.148829 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148835 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148838 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.148841 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.148844 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148847 | orchestrator |
2026-03-17 00:58:40.148850 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-17 00:58:40.148854 | orchestrator | Tuesday 17 March 2026 00:51:27 +0000 (0:00:00.699) 0:03:06.823 *********
2026-03-17 00:58:40.148857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:58:40.148860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 00:58:40.148863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 00:58:40.148866 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148869 | orchestrator |
2026-03-17 00:58:40.148873 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-17 00:58:40.148876 | orchestrator | Tuesday 17 March 2026 00:51:28 +0000 (0:00:00.334) 0:03:07.158 *********
2026-03-17 00:58:40.148879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:58:40.148882 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 00:58:40.148885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 00:58:40.148888 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148892 | orchestrator |
2026-03-17 00:58:40.148895 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-17 00:58:40.148898 | orchestrator | Tuesday 17 March 2026 00:51:28 +0000 (0:00:00.308) 0:03:07.466 *********
2026-03-17 00:58:40.148901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:58:40.148904 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 00:58:40.148907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 00:58:40.148911 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.148914 | orchestrator |
2026-03-17 00:58:40.148917 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-17 00:58:40.148920 | orchestrator | Tuesday 17 March 2026 00:51:28 +0000 (0:00:00.362) 0:03:07.829 *********
2026-03-17 00:58:40.148923 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.148926 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.148930 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.148965 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.148971 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.148976 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.148981 | orchestrator |
2026-03-17 00:58:40.148986 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-17 00:58:40.148989 | orchestrator | Tuesday 17 March 2026 00:51:29 +0000 (0:00:00.770) 0:03:08.600 *********
2026-03-17 00:58:40.148992 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-17 00:58:40.148996 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-17 00:58:40.148999 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-17 00:58:40.149002 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-17 00:58:40.149005 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.149008 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-17 00:58:40.149012 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.149015 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-17 00:58:40.149022 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.149025 | orchestrator |
2026-03-17 00:58:40.149028 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-17 00:58:40.149034 | orchestrator | Tuesday 17 March 2026 00:51:31 +0000 (0:00:01.780) 0:03:10.380 *********
2026-03-17 00:58:40.149037 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:58:40.149040 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:58:40.149044 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:58:40.149047 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.149053 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.149056 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.149059 | orchestrator |
2026-03-17 00:58:40.149062 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-17 00:58:40.149066 | orchestrator | Tuesday 17 March 2026 00:51:33 +0000 (0:00:02.553) 0:03:12.934 *********
2026-03-17 00:58:40.149069 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:58:40.149072 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:58:40.149075 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:58:40.149078 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.149082 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.149085 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.149088 | orchestrator |
2026-03-17 00:58:40.149091 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-17 00:58:40.149094 | orchestrator | Tuesday 17 March 2026 00:51:35 +0000 (0:00:01.216) 0:03:14.151 *********
2026-03-17 00:58:40.149098 | orchestrator | skipping:
[testbed-node-3] 2026-03-17 00:58:40.149101 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.149104 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.149107 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:58:40.149111 | orchestrator | 2026-03-17 00:58:40.149114 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-17 00:58:40.149129 | orchestrator | Tuesday 17 March 2026 00:51:36 +0000 (0:00:00.853) 0:03:15.004 ********* 2026-03-17 00:58:40.149133 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.149136 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.149139 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.149142 | orchestrator | 2026-03-17 00:58:40.149145 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-17 00:58:40.149149 | orchestrator | Tuesday 17 March 2026 00:51:36 +0000 (0:00:00.250) 0:03:15.255 ********* 2026-03-17 00:58:40.149152 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:40.149155 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:40.149158 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:40.149161 | orchestrator | 2026-03-17 00:58:40.149165 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-17 00:58:40.149168 | orchestrator | Tuesday 17 March 2026 00:51:37 +0000 (0:00:01.561) 0:03:16.817 ********* 2026-03-17 00:58:40.149171 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-17 00:58:40.149174 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-17 00:58:40.149177 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-17 00:58:40.149181 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.149184 | orchestrator | 2026-03-17 
00:58:40.149187 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-17 00:58:40.149193 | orchestrator | Tuesday 17 March 2026 00:51:38 +0000 (0:00:00.924) 0:03:17.741 ********* 2026-03-17 00:58:40.149197 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.149200 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.149203 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.149207 | orchestrator | 2026-03-17 00:58:40.149210 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-17 00:58:40.149213 | orchestrator | Tuesday 17 March 2026 00:51:39 +0000 (0:00:00.286) 0:03:18.028 ********* 2026-03-17 00:58:40.149216 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.149220 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.149223 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.149226 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.149229 | orchestrator | 2026-03-17 00:58:40.149232 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-17 00:58:40.149236 | orchestrator | Tuesday 17 March 2026 00:51:39 +0000 (0:00:00.882) 0:03:18.911 ********* 2026-03-17 00:58:40.149241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:58:40.149244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:58:40.149247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:58:40.149251 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149254 | orchestrator | 2026-03-17 00:58:40.149257 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-17 00:58:40.149260 | orchestrator | Tuesday 17 March 2026 00:51:40 +0000 (0:00:00.323) 
0:03:19.234 ********* 2026-03-17 00:58:40.149264 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149267 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.149270 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.149273 | orchestrator | 2026-03-17 00:58:40.149276 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-17 00:58:40.149279 | orchestrator | Tuesday 17 March 2026 00:51:40 +0000 (0:00:00.246) 0:03:19.481 ********* 2026-03-17 00:58:40.149282 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149285 | orchestrator | 2026-03-17 00:58:40.149289 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-17 00:58:40.149292 | orchestrator | Tuesday 17 March 2026 00:51:40 +0000 (0:00:00.180) 0:03:19.662 ********* 2026-03-17 00:58:40.149295 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149298 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.149301 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.149304 | orchestrator | 2026-03-17 00:58:40.149307 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-17 00:58:40.149310 | orchestrator | Tuesday 17 March 2026 00:51:40 +0000 (0:00:00.224) 0:03:19.887 ********* 2026-03-17 00:58:40.149313 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149316 | orchestrator | 2026-03-17 00:58:40.149319 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-17 00:58:40.149325 | orchestrator | Tuesday 17 March 2026 00:51:41 +0000 (0:00:00.191) 0:03:20.079 ********* 2026-03-17 00:58:40.149328 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149331 | orchestrator | 2026-03-17 00:58:40.149334 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-17 
00:58:40.149337 | orchestrator | Tuesday 17 March 2026 00:51:41 +0000 (0:00:00.154) 0:03:20.233 ********* 2026-03-17 00:58:40.149340 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149344 | orchestrator | 2026-03-17 00:58:40.149347 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-17 00:58:40.149350 | orchestrator | Tuesday 17 March 2026 00:51:41 +0000 (0:00:00.083) 0:03:20.316 ********* 2026-03-17 00:58:40.149353 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149356 | orchestrator | 2026-03-17 00:58:40.149359 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-17 00:58:40.149362 | orchestrator | Tuesday 17 March 2026 00:51:41 +0000 (0:00:00.421) 0:03:20.738 ********* 2026-03-17 00:58:40.149365 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149368 | orchestrator | 2026-03-17 00:58:40.149371 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-17 00:58:40.149374 | orchestrator | Tuesday 17 March 2026 00:51:41 +0000 (0:00:00.188) 0:03:20.927 ********* 2026-03-17 00:58:40.149377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:58:40.149381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:58:40.149384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:58:40.149389 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149394 | orchestrator | 2026-03-17 00:58:40.149400 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-17 00:58:40.149418 | orchestrator | Tuesday 17 March 2026 00:51:42 +0000 (0:00:00.315) 0:03:21.243 ********* 2026-03-17 00:58:40.149429 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149434 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.149438 | 
orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.149442 | orchestrator | 2026-03-17 00:58:40.149447 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-17 00:58:40.149452 | orchestrator | Tuesday 17 March 2026 00:51:42 +0000 (0:00:00.223) 0:03:21.467 ********* 2026-03-17 00:58:40.149465 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149471 | orchestrator | 2026-03-17 00:58:40.149476 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-17 00:58:40.149481 | orchestrator | Tuesday 17 March 2026 00:51:42 +0000 (0:00:00.171) 0:03:21.638 ********* 2026-03-17 00:58:40.149486 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149491 | orchestrator | 2026-03-17 00:58:40.149496 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-17 00:58:40.149501 | orchestrator | Tuesday 17 March 2026 00:51:42 +0000 (0:00:00.180) 0:03:21.818 ********* 2026-03-17 00:58:40.149506 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.149511 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.149514 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.149518 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.149521 | orchestrator | 2026-03-17 00:58:40.149524 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-17 00:58:40.149527 | orchestrator | Tuesday 17 March 2026 00:51:43 +0000 (0:00:00.859) 0:03:22.678 ********* 2026-03-17 00:58:40.149530 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.149533 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.149536 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.149540 | orchestrator | 2026-03-17 00:58:40.149543 | orchestrator | RUNNING HANDLER 
[ceph-handler : Copy mds restart script] *********************** 2026-03-17 00:58:40.149546 | orchestrator | Tuesday 17 March 2026 00:51:44 +0000 (0:00:00.276) 0:03:22.954 ********* 2026-03-17 00:58:40.149549 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.149552 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.149555 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.149558 | orchestrator | 2026-03-17 00:58:40.149561 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-17 00:58:40.149564 | orchestrator | Tuesday 17 March 2026 00:51:45 +0000 (0:00:01.291) 0:03:24.246 ********* 2026-03-17 00:58:40.149567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:58:40.149570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:58:40.149573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:58:40.149576 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149579 | orchestrator | 2026-03-17 00:58:40.149583 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-17 00:58:40.149586 | orchestrator | Tuesday 17 March 2026 00:51:45 +0000 (0:00:00.678) 0:03:24.924 ********* 2026-03-17 00:58:40.149589 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.149592 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.149595 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.149598 | orchestrator | 2026-03-17 00:58:40.149601 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-17 00:58:40.149604 | orchestrator | Tuesday 17 March 2026 00:51:46 +0000 (0:00:00.406) 0:03:25.330 ********* 2026-03-17 00:58:40.149607 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.149611 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.149614 | orchestrator | 
skipping: [testbed-node-2] 2026-03-17 00:58:40.149617 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.149620 | orchestrator | 2026-03-17 00:58:40.149623 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-17 00:58:40.149626 | orchestrator | Tuesday 17 March 2026 00:51:47 +0000 (0:00:00.715) 0:03:26.046 ********* 2026-03-17 00:58:40.149632 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.149636 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.149639 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.149642 | orchestrator | 2026-03-17 00:58:40.149645 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-17 00:58:40.149650 | orchestrator | Tuesday 17 March 2026 00:51:47 +0000 (0:00:00.407) 0:03:26.453 ********* 2026-03-17 00:58:40.149653 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.149656 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.149659 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.149663 | orchestrator | 2026-03-17 00:58:40.149666 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-17 00:58:40.149669 | orchestrator | Tuesday 17 March 2026 00:51:48 +0000 (0:00:01.302) 0:03:27.756 ********* 2026-03-17 00:58:40.149672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:58:40.149675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:58:40.149678 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:58:40.149681 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149684 | orchestrator | 2026-03-17 00:58:40.149687 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-17 
00:58:40.149690 | orchestrator | Tuesday 17 March 2026 00:51:49 +0000 (0:00:00.615) 0:03:28.371 ********* 2026-03-17 00:58:40.149694 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.149697 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.149700 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.149703 | orchestrator | 2026-03-17 00:58:40.149706 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-17 00:58:40.149709 | orchestrator | Tuesday 17 March 2026 00:51:49 +0000 (0:00:00.330) 0:03:28.702 ********* 2026-03-17 00:58:40.149712 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149715 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.149718 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.149722 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.149725 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.149740 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.149744 | orchestrator | 2026-03-17 00:58:40.149747 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-17 00:58:40.149750 | orchestrator | Tuesday 17 March 2026 00:51:50 +0000 (0:00:00.981) 0:03:29.683 ********* 2026-03-17 00:58:40.149753 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.149756 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.149759 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.149763 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:58:40.149766 | orchestrator | 2026-03-17 00:58:40.149769 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-17 00:58:40.149772 | orchestrator | Tuesday 17 March 2026 00:51:51 +0000 (0:00:00.784) 0:03:30.468 ********* 2026-03-17 00:58:40.149775 | orchestrator | 
ok: [testbed-node-0] 2026-03-17 00:58:40.149778 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.149781 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.149784 | orchestrator | 2026-03-17 00:58:40.149788 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-17 00:58:40.149793 | orchestrator | Tuesday 17 March 2026 00:51:52 +0000 (0:00:00.519) 0:03:30.988 ********* 2026-03-17 00:58:40.149798 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:40.149801 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:40.149804 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:40.149807 | orchestrator | 2026-03-17 00:58:40.149810 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-17 00:58:40.149814 | orchestrator | Tuesday 17 March 2026 00:51:53 +0000 (0:00:01.067) 0:03:32.055 ********* 2026-03-17 00:58:40.149820 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-17 00:58:40.149823 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-17 00:58:40.149826 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-17 00:58:40.149829 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.149832 | orchestrator | 2026-03-17 00:58:40.149835 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-17 00:58:40.149838 | orchestrator | Tuesday 17 March 2026 00:51:53 +0000 (0:00:00.536) 0:03:32.592 ********* 2026-03-17 00:58:40.149841 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.149844 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.149847 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.149850 | orchestrator | 2026-03-17 00:58:40.149854 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-17 00:58:40.149857 | orchestrator | 2026-03-17 
00:58:40.149860 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-17 00:58:40.149863 | orchestrator | Tuesday 17 March 2026 00:51:54 +0000 (0:00:00.498) 0:03:33.090 ********* 2026-03-17 00:58:40.149866 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:58:40.149869 | orchestrator | 2026-03-17 00:58:40.149872 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-17 00:58:40.149875 | orchestrator | Tuesday 17 March 2026 00:51:54 +0000 (0:00:00.579) 0:03:33.670 ********* 2026-03-17 00:58:40.149878 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:58:40.149881 | orchestrator | 2026-03-17 00:58:40.149885 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-17 00:58:40.149888 | orchestrator | Tuesday 17 March 2026 00:51:55 +0000 (0:00:00.444) 0:03:34.114 ********* 2026-03-17 00:58:40.149891 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.149894 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.149897 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.149900 | orchestrator | 2026-03-17 00:58:40.149903 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-17 00:58:40.149906 | orchestrator | Tuesday 17 March 2026 00:51:55 +0000 (0:00:00.801) 0:03:34.915 ********* 2026-03-17 00:58:40.149909 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.149912 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.149915 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.149919 | orchestrator | 2026-03-17 00:58:40.149923 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-03-17 00:58:40.149928 | orchestrator | Tuesday 17 March 2026 00:51:56 +0000 (0:00:00.259) 0:03:35.175 ********* 2026-03-17 00:58:40.149943 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.149946 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.149949 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.149952 | orchestrator | 2026-03-17 00:58:40.149955 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-17 00:58:40.149958 | orchestrator | Tuesday 17 March 2026 00:51:56 +0000 (0:00:00.244) 0:03:35.420 ********* 2026-03-17 00:58:40.149961 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.149965 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.149968 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.149971 | orchestrator | 2026-03-17 00:58:40.149974 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-17 00:58:40.149977 | orchestrator | Tuesday 17 March 2026 00:51:56 +0000 (0:00:00.264) 0:03:35.684 ********* 2026-03-17 00:58:40.149980 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.149983 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.149986 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.149989 | orchestrator | 2026-03-17 00:58:40.149992 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-17 00:58:40.149998 | orchestrator | Tuesday 17 March 2026 00:51:57 +0000 (0:00:00.773) 0:03:36.457 ********* 2026-03-17 00:58:40.150001 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.150004 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.150007 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.150010 | orchestrator | 2026-03-17 00:58:40.150038 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 
00:58:40.150042 | orchestrator | Tuesday 17 March 2026 00:51:57 +0000 (0:00:00.321) 0:03:36.779 ********* 2026-03-17 00:58:40.150057 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.150060 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.150064 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.150067 | orchestrator | 2026-03-17 00:58:40.150070 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 00:58:40.150073 | orchestrator | Tuesday 17 March 2026 00:51:58 +0000 (0:00:00.286) 0:03:37.066 ********* 2026-03-17 00:58:40.150076 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.150079 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.150082 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.150085 | orchestrator | 2026-03-17 00:58:40.150089 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-17 00:58:40.150092 | orchestrator | Tuesday 17 March 2026 00:51:58 +0000 (0:00:00.646) 0:03:37.712 ********* 2026-03-17 00:58:40.150095 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.150098 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.150101 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.150104 | orchestrator | 2026-03-17 00:58:40.150107 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-17 00:58:40.150110 | orchestrator | Tuesday 17 March 2026 00:51:59 +0000 (0:00:00.797) 0:03:38.509 ********* 2026-03-17 00:58:40.150114 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.150117 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.150120 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.150123 | orchestrator | 2026-03-17 00:58:40.150126 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-17 00:58:40.150129 | orchestrator | 
Tuesday 17 March 2026 00:51:59 +0000 (0:00:00.280) 0:03:38.790 ********* 2026-03-17 00:58:40.150132 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.150135 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.150138 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.150141 | orchestrator | 2026-03-17 00:58:40.150144 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-17 00:58:40.150148 | orchestrator | Tuesday 17 March 2026 00:52:00 +0000 (0:00:00.333) 0:03:39.124 ********* 2026-03-17 00:58:40.150151 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.150154 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.150157 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.150160 | orchestrator | 2026-03-17 00:58:40.150163 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-17 00:58:40.150166 | orchestrator | Tuesday 17 March 2026 00:52:00 +0000 (0:00:00.304) 0:03:39.429 ********* 2026-03-17 00:58:40.150169 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.150172 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.150176 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.150179 | orchestrator | 2026-03-17 00:58:40.150182 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-17 00:58:40.150185 | orchestrator | Tuesday 17 March 2026 00:52:00 +0000 (0:00:00.261) 0:03:39.690 ********* 2026-03-17 00:58:40.150188 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.150192 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.150197 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:40.150202 | orchestrator | 2026-03-17 00:58:40.150207 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-17 00:58:40.150211 | orchestrator | Tuesday 17 March 2026 
00:52:01 +0000 (0:00:00.437) 0:03:40.128 *********
2026-03-17 00:58:40.150224 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.150229 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.150235 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.150240 | orchestrator |
2026-03-17 00:58:40.150245 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-17 00:58:40.150249 | orchestrator | Tuesday 17 March 2026 00:52:01 +0000 (0:00:00.259) 0:03:40.387 *********
2026-03-17 00:58:40.150255 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.150260 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.150265 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.150270 | orchestrator |
2026-03-17 00:58:40.150276 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-17 00:58:40.150280 | orchestrator | Tuesday 17 March 2026 00:52:01 +0000 (0:00:00.312) 0:03:40.700 *********
2026-03-17 00:58:40.150286 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.150290 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.150293 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.150296 | orchestrator |
2026-03-17 00:58:40.150299 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-17 00:58:40.150305 | orchestrator | Tuesday 17 March 2026 00:52:02 +0000 (0:00:00.285) 0:03:40.985 *********
2026-03-17 00:58:40.150308 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.150311 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.150315 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.150318 | orchestrator |
2026-03-17 00:58:40.150321 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-17 00:58:40.150324 | orchestrator | Tuesday 17 March 2026 00:52:02 +0000 (0:00:00.451) 0:03:41.437 *********
2026-03-17 00:58:40.150327 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.150330 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.150333 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.150336 | orchestrator |
2026-03-17 00:58:40.150339 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-17 00:58:40.150342 | orchestrator | Tuesday 17 March 2026 00:52:02 +0000 (0:00:00.482) 0:03:41.920 *********
2026-03-17 00:58:40.150345 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.150349 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.150352 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.150355 | orchestrator |
2026-03-17 00:58:40.150358 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-17 00:58:40.150361 | orchestrator | Tuesday 17 March 2026 00:52:03 +0000 (0:00:00.369) 0:03:42.289 *********
2026-03-17 00:58:40.150364 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.150367 | orchestrator |
2026-03-17 00:58:40.150370 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-17 00:58:40.150373 | orchestrator | Tuesday 17 March 2026 00:52:04 +0000 (0:00:00.702) 0:03:42.991 *********
2026-03-17 00:58:40.150376 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.150380 | orchestrator |
2026-03-17 00:58:40.150395 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-17 00:58:40.150399 | orchestrator | Tuesday 17 March 2026 00:52:04 +0000 (0:00:00.139) 0:03:43.131 *********
2026-03-17 00:58:40.150402 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-17 00:58:40.150405 | orchestrator |
2026-03-17 00:58:40.150408 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-17 00:58:40.150411 | orchestrator | Tuesday 17 March 2026 00:52:05 +0000 (0:00:00.869) 0:03:44.000 *********
2026-03-17 00:58:40.150415 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.150418 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.150421 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.150424 | orchestrator |
2026-03-17 00:58:40.150427 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-17 00:58:40.150430 | orchestrator | Tuesday 17 March 2026 00:52:05 +0000 (0:00:00.297) 0:03:44.297 *********
2026-03-17 00:58:40.150436 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.150439 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.150442 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.150445 | orchestrator |
2026-03-17 00:58:40.150448 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-17 00:58:40.150452 | orchestrator | Tuesday 17 March 2026 00:52:05 +0000 (0:00:00.279) 0:03:44.577 *********
2026-03-17 00:58:40.150455 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.150458 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.150461 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.150464 | orchestrator |
2026-03-17 00:58:40.150467 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-17 00:58:40.150470 | orchestrator | Tuesday 17 March 2026 00:52:06 +0000 (0:00:01.305) 0:03:45.882 *********
2026-03-17 00:58:40.150473 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.150476 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.150479 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.150482 | orchestrator |
2026-03-17 00:58:40.150485 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-17 00:58:40.150488 | orchestrator | Tuesday 17 March 2026 00:52:07 +0000 (0:00:00.731) 0:03:46.614 *********
2026-03-17 00:58:40.150491 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.150495 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.150498 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.150501 | orchestrator |
2026-03-17 00:58:40.150504 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-17 00:58:40.150507 | orchestrator | Tuesday 17 March 2026 00:52:08 +0000 (0:00:00.661) 0:03:47.276 *********
2026-03-17 00:58:40.150510 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.150513 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.150516 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.150519 | orchestrator |
2026-03-17 00:58:40.150522 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-17 00:58:40.150525 | orchestrator | Tuesday 17 March 2026 00:52:09 +0000 (0:00:00.729) 0:03:48.005 *********
2026-03-17 00:58:40.150529 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.150532 | orchestrator |
2026-03-17 00:58:40.150535 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-17 00:58:40.150538 | orchestrator | Tuesday 17 March 2026 00:52:10 +0000 (0:00:01.373) 0:03:49.378 *********
2026-03-17 00:58:40.150541 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.150544 | orchestrator |
2026-03-17 00:58:40.150547 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-17 00:58:40.150550 | orchestrator | Tuesday 17 March 2026 00:52:11 +0000 (0:00:00.995) 0:03:50.374 *********
2026-03-17 00:58:40.150553 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 00:58:40.150557 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 00:58:40.150560 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 00:58:40.150563 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-17 00:58:40.150566 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-17 00:58:40.150569 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-17 00:58:40.150572 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-17 00:58:40.150575 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-03-17 00:58:40.150581 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-17 00:58:40.150584 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-17 00:58:40.150587 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-17 00:58:40.150590 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-17 00:58:40.150593 | orchestrator |
2026-03-17 00:58:40.150596 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-17 00:58:40.150601 | orchestrator | Tuesday 17 March 2026 00:52:14 +0000 (0:00:03.427) 0:03:53.801 *********
2026-03-17 00:58:40.150604 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.150607 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.150610 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.150613 | orchestrator |
2026-03-17 00:58:40.150617 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-17 00:58:40.150620 | orchestrator | Tuesday 17 March 2026 00:52:16 +0000 (0:00:01.171) 0:03:54.972 *********
2026-03-17 00:58:40.150623 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.150626 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.150629 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.150632 | orchestrator |
2026-03-17 00:58:40.150635 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-17 00:58:40.150638 | orchestrator | Tuesday 17 March 2026 00:52:16 +0000 (0:00:00.275) 0:03:55.248 *********
2026-03-17 00:58:40.150641 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.150644 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.150647 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.150650 | orchestrator |
2026-03-17 00:58:40.150654 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-17 00:58:40.150657 | orchestrator | Tuesday 17 March 2026 00:52:16 +0000 (0:00:00.445) 0:03:55.693 *********
2026-03-17 00:58:40.150660 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.150672 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.150676 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.150679 | orchestrator |
2026-03-17 00:58:40.150682 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-17 00:58:40.150685 | orchestrator | Tuesday 17 March 2026 00:52:18 +0000 (0:00:01.509) 0:03:57.203 *********
2026-03-17 00:58:40.150688 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.150691 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.150694 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.150697 | orchestrator |
2026-03-17 00:58:40.150700 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-17 00:58:40.150703 | orchestrator | Tuesday 17 March 2026 00:52:19 +0000 (0:00:01.389) 0:03:58.593 *********
2026-03-17 00:58:40.150706 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.150709 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.150712 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.150715 | orchestrator |
2026-03-17 00:58:40.150719 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-17 00:58:40.150722 | orchestrator | Tuesday 17 March 2026 00:52:19 +0000 (0:00:00.258) 0:03:58.851 *********
2026-03-17 00:58:40.150725 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.150728 | orchestrator |
2026-03-17 00:58:40.150731 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-17 00:58:40.150734 | orchestrator | Tuesday 17 March 2026 00:52:20 +0000 (0:00:00.654) 0:03:59.506 *********
2026-03-17 00:58:40.150737 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.150740 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.150743 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.150746 | orchestrator |
2026-03-17 00:58:40.150749 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-17 00:58:40.150752 | orchestrator | Tuesday 17 March 2026 00:52:20 +0000 (0:00:00.280) 0:03:59.786 *********
2026-03-17 00:58:40.150755 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.150758 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.150761 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.150764 | orchestrator |
2026-03-17 00:58:40.150767 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-17 00:58:40.150771 | orchestrator | Tuesday 17 March 2026 00:52:21 +0000 (0:00:00.298) 0:04:00.084 *********
2026-03-17 00:58:40.150776 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.150779 | orchestrator |
2026-03-17 00:58:40.150782 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-17 00:58:40.150785 | orchestrator | Tuesday 17 March 2026 00:52:21 +0000 (0:00:00.586) 0:04:00.671 *********
2026-03-17 00:58:40.150788 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.150791 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.150794 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.150797 | orchestrator |
2026-03-17 00:58:40.150800 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-17 00:58:40.150803 | orchestrator | Tuesday 17 March 2026 00:52:23 +0000 (0:00:01.403) 0:04:02.074 *********
2026-03-17 00:58:40.150806 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.150809 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.150812 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.150816 | orchestrator |
2026-03-17 00:58:40.150819 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-17 00:58:40.150822 | orchestrator | Tuesday 17 March 2026 00:52:24 +0000 (0:00:01.197) 0:04:03.272 *********
2026-03-17 00:58:40.150825 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.150828 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.150831 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.150834 | orchestrator |
2026-03-17 00:58:40.150837 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-17 00:58:40.150840 | orchestrator | Tuesday 17 March 2026 00:52:27 +0000 (0:00:02.904) 0:04:06.176 *********
2026-03-17 00:58:40.150843 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.150846 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.150849 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.150852 | orchestrator |
2026-03-17 00:58:40.150857 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-17 00:58:40.150860 | orchestrator | Tuesday 17 March 2026 00:52:29 +0000 (0:00:02.354) 0:04:08.530 *********
2026-03-17 00:58:40.150863 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.150866 | orchestrator |
2026-03-17 00:58:40.150869 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-17 00:58:40.150872 | orchestrator | Tuesday 17 March 2026 00:52:30 +0000 (0:00:00.565) 0:04:09.096 *********
2026-03-17 00:58:40.150875 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.150879 | orchestrator |
2026-03-17 00:58:40.150882 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-17 00:58:40.150885 | orchestrator | Tuesday 17 March 2026 00:52:31 +0000 (0:00:01.089) 0:04:10.186 *********
2026-03-17 00:58:40.150888 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.150891 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.150894 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.150897 | orchestrator |
2026-03-17 00:58:40.150900 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-17 00:58:40.150903 | orchestrator | Tuesday 17 March 2026 00:52:41 +0000 (0:00:10.562) 0:04:20.748 *********
2026-03-17 00:58:40.150906 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.150909 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.150912 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.150915 | orchestrator |
2026-03-17 00:58:40.150918 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-17 00:58:40.150921 | orchestrator | Tuesday 17 March 2026 00:52:42 +0000 (0:00:00.483) 0:04:21.232 *********
2026-03-17 00:58:40.150946 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e6c930a3c4e8ea27dd892321162d8f8854fd72b7'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-17 00:58:40.150958 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e6c930a3c4e8ea27dd892321162d8f8854fd72b7'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-17 00:58:40.150962 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e6c930a3c4e8ea27dd892321162d8f8854fd72b7'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-17 00:58:40.150966 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e6c930a3c4e8ea27dd892321162d8f8854fd72b7'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-17 00:58:40.150970 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e6c930a3c4e8ea27dd892321162d8f8854fd72b7'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-17 00:58:40.150974 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e6c930a3c4e8ea27dd892321162d8f8854fd72b7'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__e6c930a3c4e8ea27dd892321162d8f8854fd72b7'}])
2026-03-17 00:58:40.150978 | orchestrator |
2026-03-17 00:58:40.150981 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-17 00:58:40.150984 | orchestrator | Tuesday 17 March 2026 00:52:56 +0000 (0:00:13.722) 0:04:34.955 *********
2026-03-17 00:58:40.150987 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.150990 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.150993 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.150996 | orchestrator |
2026-03-17 00:58:40.150999 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-17 00:58:40.151003 | orchestrator | Tuesday 17 March 2026 00:52:56 +0000 (0:00:00.295) 0:04:35.250 *********
2026-03-17 00:58:40.151006 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.151009 | orchestrator |
2026-03-17 00:58:40.151014 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-17 00:58:40.151017 | orchestrator | Tuesday 17 March 2026 00:52:56 +0000 (0:00:00.674) 0:04:35.924 *********
2026-03-17 00:58:40.151020 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.151023 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.151027 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.151030 | orchestrator |
2026-03-17 00:58:40.151033 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-17 00:58:40.151036 | orchestrator | Tuesday 17 March 2026 00:52:57 +0000 (0:00:00.357) 0:04:36.282 *********
2026-03-17 00:58:40.151039 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151042 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151045 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151049 | orchestrator |
2026-03-17 00:58:40.151054 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-17 00:58:40.151063 | orchestrator | Tuesday 17 March 2026 00:52:57 +0000 (0:00:00.426) 0:04:36.708 *********
2026-03-17 00:58:40.151068 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-17 00:58:40.151072 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-17 00:58:40.151077 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-17 00:58:40.151082 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151086 | orchestrator |
2026-03-17 00:58:40.151090 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-17 00:58:40.151095 | orchestrator | Tuesday 17 March 2026 00:52:58 +0000 (0:00:00.672) 0:04:37.381 *********
2026-03-17 00:58:40.151100 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.151105 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.151109 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.151114 | orchestrator |
2026-03-17 00:58:40.151119 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-17 00:58:40.151125 | orchestrator |
2026-03-17 00:58:40.151147 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-17 00:58:40.151153 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.610) 0:04:37.991 *********
2026-03-17 00:58:40.151156 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.151160 | orchestrator |
2026-03-17 00:58:40.151163 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-17 00:58:40.151166 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.404) 0:04:38.396 *********
2026-03-17 00:58:40.151169 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.151172 | orchestrator |
2026-03-17 00:58:40.151175 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-17 00:58:40.151178 | orchestrator | Tuesday 17 March 2026 00:53:00 +0000 (0:00:00.671) 0:04:39.067 *********
2026-03-17 00:58:40.151181 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.151185 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.151188 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.151191 | orchestrator |
2026-03-17 00:58:40.151194 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-17 00:58:40.151197 | orchestrator | Tuesday 17 March 2026 00:53:01 +0000 (0:00:01.009) 0:04:40.077 *********
2026-03-17 00:58:40.151200 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151203 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151206 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151209 | orchestrator |
2026-03-17 00:58:40.151212 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-17 00:58:40.151215 | orchestrator | Tuesday 17 March 2026 00:53:01 +0000 (0:00:00.588) 0:04:40.584 *********
2026-03-17 00:58:40.151218 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151222 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151225 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151228 | orchestrator |
2026-03-17 00:58:40.151231 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-17 00:58:40.151234 | orchestrator | Tuesday 17 March 2026 00:53:02 +0000 (0:00:00.588) 0:04:41.172 *********
2026-03-17 00:58:40.151237 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151240 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151243 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151246 | orchestrator |
2026-03-17 00:58:40.151249 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-17 00:58:40.151252 | orchestrator | Tuesday 17 March 2026 00:53:02 +0000 (0:00:00.280) 0:04:41.452 *********
2026-03-17 00:58:40.151255 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.151258 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.151261 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.151267 | orchestrator |
2026-03-17 00:58:40.151270 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-17 00:58:40.151274 | orchestrator | Tuesday 17 March 2026 00:53:03 +0000 (0:00:00.889) 0:04:42.342 *********
2026-03-17 00:58:40.151277 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151280 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151283 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151286 | orchestrator |
2026-03-17 00:58:40.151289 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-17 00:58:40.151292 | orchestrator | Tuesday 17 March 2026 00:53:03 +0000 (0:00:00.473) 0:04:42.815 *********
2026-03-17 00:58:40.151295 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151298 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151301 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151304 | orchestrator |
2026-03-17 00:58:40.151307 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-17 00:58:40.151310 | orchestrator | Tuesday 17 March 2026 00:53:04 +0000 (0:00:00.712) 0:04:43.528 *********
2026-03-17 00:58:40.151314 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.151317 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.151320 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.151323 | orchestrator |
2026-03-17 00:58:40.151326 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-17 00:58:40.151331 | orchestrator | Tuesday 17 March 2026 00:53:05 +0000 (0:00:00.730) 0:04:44.259 *********
2026-03-17 00:58:40.151334 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.151337 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.151340 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.151343 | orchestrator |
2026-03-17 00:58:40.151346 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-17 00:58:40.151350 | orchestrator | Tuesday 17 March 2026 00:53:06 +0000 (0:00:00.772) 0:04:45.031 *********
2026-03-17 00:58:40.151355 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151360 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151365 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151370 | orchestrator |
2026-03-17 00:58:40.151375 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-17 00:58:40.151380 | orchestrator | Tuesday 17 March 2026 00:53:06 +0000 (0:00:00.344) 0:04:45.376 *********
2026-03-17 00:58:40.151385 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.151390 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.151395 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.151401 | orchestrator |
2026-03-17 00:58:40.151404 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-17 00:58:40.151407 | orchestrator | Tuesday 17 March 2026 00:53:06 +0000 (0:00:00.310) 0:04:45.686 *********
2026-03-17 00:58:40.151410 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151413 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151416 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151419 | orchestrator |
2026-03-17 00:58:40.151422 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-17 00:58:40.151425 | orchestrator | Tuesday 17 March 2026 00:53:07 +0000 (0:00:00.511) 0:04:46.198 *********
2026-03-17 00:58:40.151428 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151431 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151446 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151450 | orchestrator |
2026-03-17 00:58:40.151453 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-17 00:58:40.151456 | orchestrator | Tuesday 17 March 2026 00:53:07 +0000 (0:00:00.240) 0:04:46.439 *********
2026-03-17 00:58:40.151459 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151463 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151466 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151469 | orchestrator |
2026-03-17 00:58:40.151472 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-17 00:58:40.151478 | orchestrator | Tuesday 17 March 2026 00:53:07 +0000 (0:00:00.234) 0:04:46.674 *********
2026-03-17 00:58:40.151481 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151484 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151487 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151490 | orchestrator |
2026-03-17 00:58:40.151493 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-17 00:58:40.151496 | orchestrator | Tuesday 17 March 2026 00:53:07 +0000 (0:00:00.251) 0:04:46.926 *********
2026-03-17 00:58:40.151500 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151503 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151506 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151509 | orchestrator |
2026-03-17 00:58:40.151512 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-17 00:58:40.151515 | orchestrator | Tuesday 17 March 2026 00:53:08 +0000 (0:00:00.440) 0:04:47.366 *********
2026-03-17 00:58:40.151518 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.151521 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.151524 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.151528 | orchestrator |
2026-03-17 00:58:40.151531 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-17 00:58:40.151534 | orchestrator | Tuesday 17 March 2026 00:53:08 +0000 (0:00:00.275) 0:04:47.642 *********
2026-03-17 00:58:40.151537 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.151540 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.151543 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.151546 | orchestrator |
2026-03-17 00:58:40.151549 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-17 00:58:40.151552 | orchestrator | Tuesday 17 March 2026 00:53:08 +0000 (0:00:00.242) 0:04:47.885 *********
2026-03-17 00:58:40.151556 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.151559 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.151562 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.151565 | orchestrator |
2026-03-17 00:58:40.151568 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-17 00:58:40.151571 | orchestrator | Tuesday 17 March 2026 00:53:09 +0000 (0:00:00.550) 0:04:48.435 *********
2026-03-17 00:58:40.151574 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-17 00:58:40.151577 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-17 00:58:40.151580 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-17 00:58:40.151583 | orchestrator |
2026-03-17 00:58:40.151587 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-17 00:58:40.151590 | orchestrator | Tuesday 17 March 2026 00:53:10 +0000 (0:00:00.566) 0:04:49.002 *********
2026-03-17 00:58:40.151593 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.151596 | orchestrator |
2026-03-17 00:58:40.151599 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-17 00:58:40.151602 | orchestrator | Tuesday 17 March 2026 00:53:10 +0000 (0:00:00.468) 0:04:49.471 *********
2026-03-17 00:58:40.151605 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.151608 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.151612 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.151615 | orchestrator |
2026-03-17 00:58:40.151618 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-17 00:58:40.151621 | orchestrator | Tuesday 17 March 2026 00:53:11 +0000 (0:00:00.582) 0:04:50.053 *********
2026-03-17 00:58:40.151624 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151627 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151630 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151633 | orchestrator |
2026-03-17 00:58:40.151638 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-17 00:58:40.151644 | orchestrator | Tuesday 17 March 2026 00:53:11 +0000 (0:00:00.452) 0:04:50.505 *********
2026-03-17 00:58:40.151647 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 00:58:40.151650 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 00:58:40.151653 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 00:58:40.151656 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-17 00:58:40.151660 | orchestrator |
2026-03-17 00:58:40.151663 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-17 00:58:40.151666 | orchestrator | Tuesday 17 March 2026 00:53:20 +0000 (0:00:08.703) 0:04:59.208 *********
2026-03-17 00:58:40.151669 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.151672 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.151675 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.151678 | orchestrator |
2026-03-17 00:58:40.151681 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-17 00:58:40.151684 | orchestrator | Tuesday 17 March 2026 00:53:20 +0000 (0:00:00.532) 0:04:59.741 *********
2026-03-17 00:58:40.151687 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-17 00:58:40.151690 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-17 00:58:40.151694 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-17 00:58:40.151697 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-17 00:58:40.151700 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 00:58:40.151703 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 00:58:40.151706 | orchestrator |
2026-03-17 00:58:40.151718 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-17 00:58:40.151721 | orchestrator | Tuesday 17 March 2026 00:53:23 +0000 (0:00:02.395) 0:05:02.136 *********
2026-03-17 00:58:40.151725 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-17 00:58:40.151728 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-17 00:58:40.151731 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-17 00:58:40.151734 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-17 00:58:40.151737 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 00:58:40.151740 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-17 00:58:40.151743 | orchestrator |
2026-03-17 00:58:40.151746 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-17 00:58:40.151749 | orchestrator | Tuesday 17 March 2026 00:53:24 +0000 (0:00:01.276) 0:05:03.413 *********
2026-03-17 00:58:40.151752 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.151756 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.151759 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.151762 | orchestrator |
2026-03-17 00:58:40.151765 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-17 00:58:40.151768 | orchestrator | Tuesday 17 March 2026 00:53:25 +0000 (0:00:00.998) 0:05:04.411 *********
2026-03-17 00:58:40.151771 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151774 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151777 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151780 | orchestrator |
2026-03-17 00:58:40.151783 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-17 00:58:40.151787 | orchestrator | Tuesday 17 March 2026 00:53:25 +0000 (0:00:00.328) 0:05:04.740 *********
2026-03-17 00:58:40.151790 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151793 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151796 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151799 | orchestrator |
2026-03-17 00:58:40.151802 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-17 00:58:40.151805 | orchestrator | Tuesday 17 March 2026 00:53:26 +0000 (0:00:00.318) 0:05:05.059 *********
2026-03-17 00:58:40.151808 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.151815 | orchestrator |
2026-03-17 00:58:40.151818 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-17 00:58:40.151821 | orchestrator | Tuesday 17 March 2026 00:53:26 +0000 (0:00:00.824) 0:05:05.883 *********
2026-03-17 00:58:40.151824 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151827 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151830 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151833 | orchestrator |
2026-03-17 00:58:40.151836 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-17 00:58:40.151839 | orchestrator | Tuesday 17 March 2026 00:53:27 +0000 (0:00:00.311) 0:05:06.208 *********
2026-03-17 00:58:40.151842 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.151846 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.151849 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.151852 | orchestrator |
2026-03-17 00:58:40.151855 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-17 00:58:40.151858 | orchestrator | Tuesday 17 March 2026 00:53:27 +0000 (0:00:00.311)
0:05:06.519 ********* 2026-03-17 00:58:40.151861 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-03-17 00:58:40.151864 | orchestrator | 2026-03-17 00:58:40.151867 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-17 00:58:40.151870 | orchestrator | Tuesday 17 March 2026 00:53:28 +0000 (0:00:00.793) 0:05:07.313 ********* 2026-03-17 00:58:40.151873 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:40.151876 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:40.151879 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:40.151882 | orchestrator | 2026-03-17 00:58:40.151886 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-17 00:58:40.151889 | orchestrator | Tuesday 17 March 2026 00:53:29 +0000 (0:00:01.269) 0:05:08.582 ********* 2026-03-17 00:58:40.151892 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:40.151895 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:40.151898 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:40.151901 | orchestrator | 2026-03-17 00:58:40.151906 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-17 00:58:40.151909 | orchestrator | Tuesday 17 March 2026 00:53:30 +0000 (0:00:01.287) 0:05:09.869 ********* 2026-03-17 00:58:40.151912 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:40.151915 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:40.151919 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:40.151922 | orchestrator | 2026-03-17 00:58:40.151925 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-17 00:58:40.151928 | orchestrator | Tuesday 17 March 2026 00:53:32 +0000 (0:00:01.964) 0:05:11.834 ********* 2026-03-17 00:58:40.151931 | orchestrator | changed: 
[testbed-node-0] 2026-03-17 00:58:40.151958 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:40.151962 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:40.151965 | orchestrator | 2026-03-17 00:58:40.151968 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-17 00:58:40.151971 | orchestrator | Tuesday 17 March 2026 00:53:34 +0000 (0:00:01.974) 0:05:13.808 ********* 2026-03-17 00:58:40.151974 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.151977 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:40.151980 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-17 00:58:40.151983 | orchestrator | 2026-03-17 00:58:40.151986 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-17 00:58:40.151990 | orchestrator | Tuesday 17 March 2026 00:53:35 +0000 (0:00:00.666) 0:05:14.475 ********* 2026-03-17 00:58:40.151993 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-17 00:58:40.152010 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-17 00:58:40.152013 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-17 00:58:40.152016 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-17 00:58:40.152020 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-03-17 00:58:40.152023 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-03-17 00:58:40.152026 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-17 00:58:40.152029 | orchestrator | 2026-03-17 00:58:40.152032 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-17 00:58:40.152035 | orchestrator | Tuesday 17 March 2026 00:54:11 +0000 (0:00:36.187) 0:05:50.663 ********* 2026-03-17 00:58:40.152038 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-17 00:58:40.152041 | orchestrator | 2026-03-17 00:58:40.152044 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-17 00:58:40.152047 | orchestrator | Tuesday 17 March 2026 00:54:12 +0000 (0:00:01.247) 0:05:51.911 ********* 2026-03-17 00:58:40.152051 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.152054 | orchestrator | 2026-03-17 00:58:40.152057 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-17 00:58:40.152060 | orchestrator | Tuesday 17 March 2026 00:54:13 +0000 (0:00:00.294) 0:05:52.205 ********* 2026-03-17 00:58:40.152063 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.152066 | orchestrator | 2026-03-17 00:58:40.152069 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-17 00:58:40.152072 | orchestrator | Tuesday 17 March 2026 00:54:13 +0000 (0:00:00.160) 0:05:52.365 ********* 2026-03-17 00:58:40.152075 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-17 00:58:40.152078 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-17 00:58:40.152081 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-17 00:58:40.152084 | orchestrator | 2026-03-17 00:58:40.152087 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-17 00:58:40.152090 | orchestrator | Tuesday 17 March 2026 00:54:19 +0000 (0:00:06.099) 0:05:58.465 ********* 2026-03-17 00:58:40.152093 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-17 00:58:40.152096 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-17 00:58:40.152100 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-17 00:58:40.152103 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-17 00:58:40.152106 | orchestrator | 2026-03-17 00:58:40.152109 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-17 00:58:40.152112 | orchestrator | Tuesday 17 March 2026 00:54:24 +0000 (0:00:04.850) 0:06:03.315 ********* 2026-03-17 00:58:40.152115 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:40.152118 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:40.152121 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:40.152124 | orchestrator | 2026-03-17 00:58:40.152127 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-17 00:58:40.152130 | orchestrator | Tuesday 17 March 2026 00:54:24 +0000 (0:00:00.610) 0:06:03.926 ********* 2026-03-17 00:58:40.152133 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:58:40.152136 | orchestrator | 2026-03-17 00:58:40.152140 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-17 00:58:40.152143 | orchestrator | Tuesday 17 March 2026 00:54:25 +0000 (0:00:00.613) 0:06:04.540 ********* 2026-03-17 00:58:40.152148 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.152151 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.152154 | orchestrator | ok: 
[testbed-node-2] 2026-03-17 00:58:40.152157 | orchestrator | 2026-03-17 00:58:40.152160 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-17 00:58:40.152164 | orchestrator | Tuesday 17 March 2026 00:54:25 +0000 (0:00:00.278) 0:06:04.818 ********* 2026-03-17 00:58:40.152167 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:40.152170 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:40.152173 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:40.152176 | orchestrator | 2026-03-17 00:58:40.152180 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-17 00:58:40.152183 | orchestrator | Tuesday 17 March 2026 00:54:27 +0000 (0:00:01.353) 0:06:06.171 ********* 2026-03-17 00:58:40.152188 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-17 00:58:40.152193 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-17 00:58:40.152198 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-17 00:58:40.152202 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:40.152207 | orchestrator | 2026-03-17 00:58:40.152212 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-17 00:58:40.152217 | orchestrator | Tuesday 17 March 2026 00:54:27 +0000 (0:00:00.545) 0:06:06.716 ********* 2026-03-17 00:58:40.152222 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.152226 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.152231 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.152236 | orchestrator | 2026-03-17 00:58:40.152241 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-17 00:58:40.152246 | orchestrator | 2026-03-17 00:58:40.152252 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-17 
00:58:40.152257 | orchestrator | Tuesday 17 March 2026 00:54:28 +0000 (0:00:00.631) 0:06:07.347 ********* 2026-03-17 00:58:40.152278 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.152283 | orchestrator | 2026-03-17 00:58:40.152286 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-17 00:58:40.152289 | orchestrator | Tuesday 17 March 2026 00:54:28 +0000 (0:00:00.447) 0:06:07.795 ********* 2026-03-17 00:58:40.152293 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.152299 | orchestrator | 2026-03-17 00:58:40.152304 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-17 00:58:40.152309 | orchestrator | Tuesday 17 March 2026 00:54:29 +0000 (0:00:00.593) 0:06:08.388 ********* 2026-03-17 00:58:40.152314 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.152319 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.152324 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.152330 | orchestrator | 2026-03-17 00:58:40.152356 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-17 00:58:40.152360 | orchestrator | Tuesday 17 March 2026 00:54:29 +0000 (0:00:00.261) 0:06:08.650 ********* 2026-03-17 00:58:40.152363 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.152366 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.152369 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.152372 | orchestrator | 2026-03-17 00:58:40.152376 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-17 00:58:40.152379 | orchestrator | Tuesday 17 March 2026 00:54:30 +0000 (0:00:00.693) 0:06:09.343 ********* 
2026-03-17 00:58:40.152384 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.152389 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.152394 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.152399 | orchestrator | 2026-03-17 00:58:40.152404 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-17 00:58:40.152414 | orchestrator | Tuesday 17 March 2026 00:54:31 +0000 (0:00:00.726) 0:06:10.070 ********* 2026-03-17 00:58:40.152418 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.152422 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.152427 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.152431 | orchestrator | 2026-03-17 00:58:40.152436 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-17 00:58:40.152441 | orchestrator | Tuesday 17 March 2026 00:54:31 +0000 (0:00:00.801) 0:06:10.871 ********* 2026-03-17 00:58:40.152446 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.152450 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.152455 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.152459 | orchestrator | 2026-03-17 00:58:40.152464 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-17 00:58:40.152469 | orchestrator | Tuesday 17 March 2026 00:54:32 +0000 (0:00:00.276) 0:06:11.147 ********* 2026-03-17 00:58:40.152473 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.152478 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.152482 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.152487 | orchestrator | 2026-03-17 00:58:40.152492 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 00:58:40.152498 | orchestrator | Tuesday 17 March 2026 00:54:32 +0000 (0:00:00.261) 0:06:11.409 ********* 2026-03-17 00:58:40.152503 | 
orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.152507 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.152512 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.152517 | orchestrator | 2026-03-17 00:58:40.152522 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 00:58:40.152527 | orchestrator | Tuesday 17 March 2026 00:54:32 +0000 (0:00:00.252) 0:06:11.661 ********* 2026-03-17 00:58:40.152532 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.152537 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.152541 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.152546 | orchestrator | 2026-03-17 00:58:40.152551 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-17 00:58:40.152556 | orchestrator | Tuesday 17 March 2026 00:54:33 +0000 (0:00:00.896) 0:06:12.558 ********* 2026-03-17 00:58:40.152561 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.152566 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.152571 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.152576 | orchestrator | 2026-03-17 00:58:40.152580 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-17 00:58:40.152589 | orchestrator | Tuesday 17 March 2026 00:54:34 +0000 (0:00:00.697) 0:06:13.255 ********* 2026-03-17 00:58:40.152594 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.152599 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.152604 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.152609 | orchestrator | 2026-03-17 00:58:40.152614 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-17 00:58:40.152619 | orchestrator | Tuesday 17 March 2026 00:54:34 +0000 (0:00:00.250) 0:06:13.506 ********* 2026-03-17 00:58:40.152623 | orchestrator | skipping: 
[testbed-node-3] 2026-03-17 00:58:40.152628 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.152633 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.152637 | orchestrator | 2026-03-17 00:58:40.152641 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-17 00:58:40.152646 | orchestrator | Tuesday 17 March 2026 00:54:34 +0000 (0:00:00.255) 0:06:13.762 ********* 2026-03-17 00:58:40.152651 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.152655 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.152660 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.152664 | orchestrator | 2026-03-17 00:58:40.152668 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-17 00:58:40.152673 | orchestrator | Tuesday 17 March 2026 00:54:35 +0000 (0:00:00.437) 0:06:14.200 ********* 2026-03-17 00:58:40.152682 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.152687 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.152691 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.152696 | orchestrator | 2026-03-17 00:58:40.152700 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-17 00:58:40.152705 | orchestrator | Tuesday 17 March 2026 00:54:35 +0000 (0:00:00.290) 0:06:14.490 ********* 2026-03-17 00:58:40.152709 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.152715 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.152725 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.152730 | orchestrator | 2026-03-17 00:58:40.152734 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-17 00:58:40.152738 | orchestrator | Tuesday 17 March 2026 00:54:35 +0000 (0:00:00.281) 0:06:14.772 ********* 2026-03-17 00:58:40.152743 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.152748 | 
orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.152752 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.152757 | orchestrator | 2026-03-17 00:58:40.152762 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-17 00:58:40.152766 | orchestrator | Tuesday 17 March 2026 00:54:36 +0000 (0:00:00.270) 0:06:15.042 ********* 2026-03-17 00:58:40.152771 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.152775 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.152780 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.152785 | orchestrator | 2026-03-17 00:58:40.152790 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-17 00:58:40.152795 | orchestrator | Tuesday 17 March 2026 00:54:36 +0000 (0:00:00.436) 0:06:15.479 ********* 2026-03-17 00:58:40.152800 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.152805 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.152810 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.152815 | orchestrator | 2026-03-17 00:58:40.152819 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-17 00:58:40.152823 | orchestrator | Tuesday 17 March 2026 00:54:36 +0000 (0:00:00.261) 0:06:15.740 ********* 2026-03-17 00:58:40.152828 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.152832 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.152837 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.152841 | orchestrator | 2026-03-17 00:58:40.152846 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-17 00:58:40.152850 | orchestrator | Tuesday 17 March 2026 00:54:37 +0000 (0:00:00.282) 0:06:16.023 ********* 2026-03-17 00:58:40.152855 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.152860 | orchestrator | ok: 
[testbed-node-4] 2026-03-17 00:58:40.152864 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.152869 | orchestrator | 2026-03-17 00:58:40.152873 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-17 00:58:40.152878 | orchestrator | Tuesday 17 March 2026 00:54:37 +0000 (0:00:00.460) 0:06:16.484 ********* 2026-03-17 00:58:40.152883 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.152888 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.152893 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.152898 | orchestrator | 2026-03-17 00:58:40.152903 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-17 00:58:40.152908 | orchestrator | Tuesday 17 March 2026 00:54:37 +0000 (0:00:00.457) 0:06:16.942 ********* 2026-03-17 00:58:40.152913 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 00:58:40.152918 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 00:58:40.152923 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 00:58:40.152929 | orchestrator | 2026-03-17 00:58:40.152944 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-17 00:58:40.152950 | orchestrator | Tuesday 17 March 2026 00:54:38 +0000 (0:00:00.554) 0:06:17.496 ********* 2026-03-17 00:58:40.152960 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.152966 | orchestrator | 2026-03-17 00:58:40.152971 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-17 00:58:40.152977 | orchestrator | Tuesday 17 March 2026 00:54:38 +0000 (0:00:00.436) 0:06:17.932 ********* 2026-03-17 00:58:40.152982 | orchestrator | skipping: 
[testbed-node-3] 2026-03-17 00:58:40.152987 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.152992 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.152997 | orchestrator | 2026-03-17 00:58:40.153002 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-17 00:58:40.153007 | orchestrator | Tuesday 17 March 2026 00:54:39 +0000 (0:00:00.392) 0:06:18.325 ********* 2026-03-17 00:58:40.153012 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.153017 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.153026 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.153031 | orchestrator | 2026-03-17 00:58:40.153036 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-17 00:58:40.153041 | orchestrator | Tuesday 17 March 2026 00:54:39 +0000 (0:00:00.263) 0:06:18.588 ********* 2026-03-17 00:58:40.153046 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.153051 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.153056 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.153061 | orchestrator | 2026-03-17 00:58:40.153066 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-17 00:58:40.153072 | orchestrator | Tuesday 17 March 2026 00:54:40 +0000 (0:00:00.585) 0:06:19.173 ********* 2026-03-17 00:58:40.153076 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.153082 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.153087 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.153092 | orchestrator | 2026-03-17 00:58:40.153097 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-17 00:58:40.153101 | orchestrator | Tuesday 17 March 2026 00:54:40 +0000 (0:00:00.266) 0:06:19.440 ********* 2026-03-17 00:58:40.153107 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-17 00:58:40.153113 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-17 00:58:40.153118 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-17 00:58:40.153123 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-17 00:58:40.153128 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-17 00:58:40.153140 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-17 00:58:40.153145 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-17 00:58:40.153150 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-17 00:58:40.153156 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-17 00:58:40.153161 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-17 00:58:40.153166 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-17 00:58:40.153171 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-17 00:58:40.153176 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-17 00:58:40.153181 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-17 00:58:40.153187 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-17 00:58:40.153193 | orchestrator | 2026-03-17 00:58:40.153203 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-03-17 00:58:40.153208 | orchestrator | Tuesday 17 March 2026 00:54:45 +0000 (0:00:04.538) 0:06:23.978 *********
2026-03-17 00:58:40.153213 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153216 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.153219 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.153222 | orchestrator |
2026-03-17 00:58:40.153225 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-17 00:58:40.153228 | orchestrator | Tuesday 17 March 2026 00:54:45 +0000 (0:00:00.308) 0:06:24.286 *********
2026-03-17 00:58:40.153232 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:58:40.153235 | orchestrator |
2026-03-17 00:58:40.153238 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-17 00:58:40.153241 | orchestrator | Tuesday 17 March 2026 00:54:45 +0000 (0:00:00.499) 0:06:24.786 *********
2026-03-17 00:58:40.153244 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-17 00:58:40.153247 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-17 00:58:40.153250 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-17 00:58:40.153253 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-17 00:58:40.153257 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-17 00:58:40.153260 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-17 00:58:40.153263 | orchestrator |
2026-03-17 00:58:40.153266 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-17 00:58:40.153269 | orchestrator | Tuesday 17 March 2026 00:54:47 +0000 (0:00:01.314) 0:06:26.100 *********
2026-03-17 00:58:40.153272 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 00:58:40.153275 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-17 00:58:40.153278 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-17 00:58:40.153281 | orchestrator |
2026-03-17 00:58:40.153284 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-17 00:58:40.153287 | orchestrator | Tuesday 17 March 2026 00:54:49 +0000 (0:00:02.203) 0:06:28.304 *********
2026-03-17 00:58:40.153290 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-17 00:58:40.153294 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-17 00:58:40.153297 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:58:40.153300 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-17 00:58:40.153303 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-17 00:58:40.153306 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:58:40.153309 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-17 00:58:40.153314 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-17 00:58:40.153317 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:58:40.153321 | orchestrator |
2026-03-17 00:58:40.153324 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-17 00:58:40.153327 | orchestrator | Tuesday 17 March 2026 00:54:50 +0000 (0:00:01.325) 0:06:29.630 *********
2026-03-17 00:58:40.153330 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-17 00:58:40.153333 | orchestrator |
2026-03-17 00:58:40.153336 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-17 00:58:40.153339 | orchestrator | Tuesday 17 March 2026 00:54:52 +0000 (0:00:02.082) 0:06:31.712 *********
2026-03-17 00:58:40.153342 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:58:40.153345 | orchestrator |
2026-03-17 00:58:40.153348 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-17 00:58:40.153351 | orchestrator | Tuesday 17 March 2026 00:54:53 +0000 (0:00:00.466) 0:06:32.179 *********
2026-03-17 00:58:40.153357 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b48309d9-c226-530e-bc23-6e205cf9651b', 'data_vg': 'ceph-b48309d9-c226-530e-bc23-6e205cf9651b'})
2026-03-17 00:58:40.153361 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0', 'data_vg': 'ceph-6d2c3af9-2510-58af-8cf3-0edda6a2b7a0'})
2026-03-17 00:58:40.153367 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-13f697f5-12ba-5526-98d1-b1a9c265f800', 'data_vg': 'ceph-13f697f5-12ba-5526-98d1-b1a9c265f800'})
2026-03-17 00:58:40.153370 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a0cc3c10-edeb-5a7b-849a-4273befffbf6', 'data_vg': 'ceph-a0cc3c10-edeb-5a7b-849a-4273befffbf6'})
2026-03-17 00:58:40.153373 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f', 'data_vg': 'ceph-6efa8bf7-29bf-52cd-bcf0-0c94ef95f07f'})
2026-03-17 00:58:40.153376 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bc85b6b7-69fe-55db-81a6-3a78775dfc6c', 'data_vg': 'ceph-bc85b6b7-69fe-55db-81a6-3a78775dfc6c'})
2026-03-17 00:58:40.153379 | orchestrator |
2026-03-17 00:58:40.153382 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-17 00:58:40.153385 | orchestrator | Tuesday 17 March 2026 00:55:30 +0000 (0:00:37.720) 0:07:09.899 *********
2026-03-17 00:58:40.153388 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153391 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.153394 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.153398 | orchestrator |
2026-03-17 00:58:40.153401 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-17 00:58:40.153404 | orchestrator | Tuesday 17 March 2026 00:55:31 +0000 (0:00:00.264) 0:07:10.164 *********
2026-03-17 00:58:40.153407 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:58:40.153410 | orchestrator |
2026-03-17 00:58:40.153413 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-17 00:58:40.153416 | orchestrator | Tuesday 17 March 2026 00:55:31 +0000 (0:00:00.441) 0:07:10.606 *********
2026-03-17 00:58:40.153419 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.153422 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.153425 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.153428 | orchestrator |
2026-03-17 00:58:40.153431 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-17 00:58:40.153434 | orchestrator | Tuesday 17 March 2026 00:55:32 +0000 (0:00:02.786) 0:07:11.493 *********
2026-03-17 00:58:40.153437 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.153440 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.153444 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.153447 | orchestrator |
2026-03-17 00:58:40.153450 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-17 00:58:40.153453 | orchestrator | Tuesday 17 March 2026 00:55:35 +0000 (0:00:02.786) 0:07:14.279 *********
2026-03-17 00:58:40.153456 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:58:40.153459 | orchestrator |
2026-03-17 00:58:40.153462 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-17 00:58:40.153465 | orchestrator | Tuesday 17 March 2026 00:55:35 +0000 (0:00:00.443) 0:07:14.722 *********
2026-03-17 00:58:40.153468 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:58:40.153471 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:58:40.153474 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:58:40.153477 | orchestrator |
2026-03-17 00:58:40.153480 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-17 00:58:40.153483 | orchestrator | Tuesday 17 March 2026 00:55:37 +0000 (0:00:01.524) 0:07:16.247 *********
2026-03-17 00:58:40.153486 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:58:40.153490 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:58:40.153495 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:58:40.153498 | orchestrator |
2026-03-17 00:58:40.153501 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-17 00:58:40.153504 | orchestrator | Tuesday 17 March 2026 00:55:38 +0000 (0:00:01.165) 0:07:17.412 *********
2026-03-17 00:58:40.153507 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:58:40.153510 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:58:40.153513 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:58:40.153516 | orchestrator |
2026-03-17 00:58:40.153519 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-17 00:58:40.153522 | orchestrator | Tuesday 17 March 2026 00:55:40 +0000 (0:00:01.697) 0:07:19.110 *********
2026-03-17 00:58:40.153527 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153530 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.153533 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.153536 | orchestrator |
2026-03-17 00:58:40.153539 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-17 00:58:40.153542 | orchestrator | Tuesday 17 March 2026 00:55:40 +0000 (0:00:00.319) 0:07:19.429 *********
2026-03-17 00:58:40.153545 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153549 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.153552 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.153555 | orchestrator |
2026-03-17 00:58:40.153558 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-17 00:58:40.153561 | orchestrator | Tuesday 17 March 2026 00:55:41 +0000 (0:00:00.567) 0:07:19.997 *********
2026-03-17 00:58:40.153564 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-17 00:58:40.153567 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-03-17 00:58:40.153570 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-17 00:58:40.153573 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-03-17 00:58:40.153576 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-03-17 00:58:40.153579 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-03-17 00:58:40.153582 | orchestrator |
2026-03-17 00:58:40.153585 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-17 00:58:40.153588 | orchestrator | Tuesday 17 March 2026 00:55:42 +0000 (0:00:01.167) 0:07:21.165 *********
2026-03-17 00:58:40.153591 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-17 00:58:40.153594 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-03-17 00:58:40.153597 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-17 00:58:40.153601 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-03-17 00:58:40.153604 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-17 00:58:40.153609 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-03-17 00:58:40.153612 | orchestrator |
2026-03-17 00:58:40.153615 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-17 00:58:40.153618 | orchestrator | Tuesday 17 March 2026 00:55:44 +0000 (0:00:02.289) 0:07:23.454 *********
2026-03-17 00:58:40.153621 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-17 00:58:40.153624 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-03-17 00:58:40.153627 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-17 00:58:40.153630 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-17 00:58:40.153633 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-03-17 00:58:40.153637 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-03-17 00:58:40.153640 | orchestrator |
2026-03-17 00:58:40.153643 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-17 00:58:40.153646 | orchestrator | Tuesday 17 March 2026 00:55:48 +0000 (0:00:04.003) 0:07:27.458 *********
2026-03-17 00:58:40.153649 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153652 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.153655 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-17 00:58:40.153658 | orchestrator |
2026-03-17 00:58:40.153661 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-17 00:58:40.153668 | orchestrator | Tuesday 17 March 2026 00:55:51 +0000 (0:00:02.888) 0:07:30.347 *********
2026-03-17 00:58:40.153671 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153674 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.153678 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-17 00:58:40.153681 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-17 00:58:40.153684 | orchestrator |
2026-03-17 00:58:40.153687 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-17 00:58:40.153690 | orchestrator | Tuesday 17 March 2026 00:56:03 +0000 (0:00:12.505) 0:07:42.852 *********
2026-03-17 00:58:40.153693 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153696 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.153699 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.153702 | orchestrator |
2026-03-17 00:58:40.153705 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-17 00:58:40.153708 | orchestrator | Tuesday 17 March 2026 00:56:04 +0000 (0:00:00.838) 0:07:43.690 *********
2026-03-17 00:58:40.153711 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153714 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.153717 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.153721 | orchestrator |
2026-03-17 00:58:40.153724 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-17 00:58:40.153727 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:00.295) 0:07:43.986 *********
2026-03-17 00:58:40.153730 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:58:40.153733 | orchestrator |
2026-03-17 00:58:40.153736 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-17 00:58:40.153739 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:00.486) 0:07:44.473 *********
2026-03-17 00:58:40.153742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:58:40.153745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 00:58:40.153748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 00:58:40.153751 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153754 | orchestrator |
2026-03-17 00:58:40.153757 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-17 00:58:40.153760 | orchestrator | Tuesday 17 March 2026 00:56:06 +0000 (0:00:00.860) 0:07:45.333 *********
2026-03-17 00:58:40.153763 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153766 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.153769 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.153772 | orchestrator |
2026-03-17 00:58:40.153776 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-17 00:58:40.153780 | orchestrator | Tuesday 17 March 2026 00:56:06 +0000 (0:00:00.308) 0:07:45.641 *********
2026-03-17 00:58:40.153783 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153786 | orchestrator |
2026-03-17 00:58:40.153789 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-17 00:58:40.153792 | orchestrator | Tuesday 17 March 2026 00:56:06 +0000 (0:00:00.216) 0:07:45.858 *********
2026-03-17 00:58:40.153795 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153799 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.153802 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.153805 | orchestrator |
2026-03-17 00:58:40.153808 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-17 00:58:40.153811 | orchestrator | Tuesday 17 March 2026 00:56:07 +0000 (0:00:00.322) 0:07:46.181 *********
2026-03-17 00:58:40.153814 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153817 | orchestrator |
2026-03-17 00:58:40.153820 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-17 00:58:40.153825 | orchestrator | Tuesday 17 March 2026 00:56:07 +0000 (0:00:00.210) 0:07:46.391 *********
2026-03-17 00:58:40.153828 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153831 | orchestrator |
2026-03-17 00:58:40.153834 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-17 00:58:40.153837 | orchestrator | Tuesday 17 March 2026 00:56:07 +0000 (0:00:00.213) 0:07:46.605 *********
2026-03-17 00:58:40.153840 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153843 | orchestrator |
2026-03-17 00:58:40.153846 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-17 00:58:40.153850 | orchestrator | Tuesday 17 March 2026 00:56:07 +0000 (0:00:00.110) 0:07:46.715 *********
2026-03-17 00:58:40.153853 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153856 | orchestrator |
2026-03-17 00:58:40.153860 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-17 00:58:40.153864 | orchestrator | Tuesday 17 March 2026 00:56:07 +0000 (0:00:00.231) 0:07:46.947 *********
2026-03-17 00:58:40.153867 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153870 | orchestrator |
2026-03-17 00:58:40.153873 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-17 00:58:40.153876 | orchestrator | Tuesday 17 March 2026 00:56:08 +0000 (0:00:00.749) 0:07:47.697 *********
2026-03-17 00:58:40.153879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 00:58:40.153882 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 00:58:40.153885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 00:58:40.153888 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153891 | orchestrator |
2026-03-17 00:58:40.153894 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-17 00:58:40.153897 | orchestrator | Tuesday 17 March 2026 00:56:09 +0000 (0:00:00.398) 0:07:48.095 *********
2026-03-17 00:58:40.153900 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153903 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.153906 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.153909 | orchestrator |
2026-03-17 00:58:40.153912 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-17 00:58:40.153916 | orchestrator | Tuesday 17 March 2026 00:56:09 +0000 (0:00:00.305) 0:07:48.400 *********
2026-03-17 00:58:40.153919 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153922 | orchestrator |
2026-03-17 00:58:40.153925 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-17 00:58:40.153928 | orchestrator | Tuesday 17 March 2026 00:56:09 +0000 (0:00:00.209) 0:07:48.610 *********
2026-03-17 00:58:40.153931 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.153959 | orchestrator |
2026-03-17 00:58:40.153962 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-17 00:58:40.153965 | orchestrator |
2026-03-17 00:58:40.153968 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-17 00:58:40.153971 | orchestrator | Tuesday 17 March 2026 00:56:10 +0000 (0:00:00.659) 0:07:49.270 *********
2026-03-17 00:58:40.153975 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.153979 | orchestrator |
2026-03-17 00:58:40.153982 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-17 00:58:40.153985 | orchestrator | Tuesday 17 March 2026 00:56:11 +0000 (0:00:01.213) 0:07:50.483 *********
2026-03-17 00:58:40.153988 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-5, testbed-node-4, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.153991 | orchestrator |
2026-03-17 00:58:40.153994 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-17 00:58:40.153997 | orchestrator | Tuesday 17 March 2026 00:56:12 +0000 (0:00:01.258) 0:07:51.742 *********
2026-03-17 00:58:40.154003 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.154006 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.154010 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.154033 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.154037 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.154040 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.154043 | orchestrator |
2026-03-17 00:58:40.154047 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-17 00:58:40.154050 | orchestrator | Tuesday 17 March 2026 00:56:14 +0000 (0:00:01.231) 0:07:52.973 *********
2026-03-17 00:58:40.154053 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.154056 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.154059 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.154062 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.154065 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.154068 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.154071 | orchestrator |
2026-03-17 00:58:40.154074 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-17 00:58:40.154077 | orchestrator | Tuesday 17 March 2026 00:56:14 +0000 (0:00:00.761) 0:07:53.734 *********
2026-03-17 00:58:40.154082 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.154086 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.154089 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.154092 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.154095 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.154098 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.154101 | orchestrator |
2026-03-17 00:58:40.154104 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-17 00:58:40.154107 | orchestrator | Tuesday 17 March 2026 00:56:15 +0000 (0:00:01.082) 0:07:54.816 *********
2026-03-17 00:58:40.154110 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.154113 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.154117 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.154120 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.154123 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.154126 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.154129 | orchestrator |
2026-03-17 00:58:40.154132 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-17 00:58:40.154135 | orchestrator | Tuesday 17 March 2026 00:56:16 +0000 (0:00:00.749) 0:07:55.566 *********
2026-03-17 00:58:40.154138 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.154141 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.154144 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.154147 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.154150 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.154153 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.154157 | orchestrator |
2026-03-17 00:58:40.154160 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-17 00:58:40.154163 | orchestrator | Tuesday 17 March 2026 00:56:17 +0000 (0:00:01.212) 0:07:56.779 *********
2026-03-17 00:58:40.154166 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.154169 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.154175 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.154178 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.154181 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.154184 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.154187 | orchestrator |
2026-03-17 00:58:40.154190 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-17 00:58:40.154194 | orchestrator | Tuesday 17 March 2026 00:56:18 +0000 (0:00:00.524) 0:07:57.303 *********
2026-03-17 00:58:40.154197 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.154200 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.154203 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.154206 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.154211 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.154214 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.154217 | orchestrator |
2026-03-17 00:58:40.154221 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-17 00:58:40.154224 | orchestrator | Tuesday 17 March 2026 00:56:19 +0000 (0:00:00.666) 0:07:57.970 *********
2026-03-17 00:58:40.154227 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.154230 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.154233 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.154236 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.154239 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.154242 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.154245 | orchestrator |
2026-03-17 00:58:40.154248 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-17 00:58:40.154251 | orchestrator | Tuesday 17 March 2026 00:56:20 +0000 (0:00:00.988) 0:07:58.958 *********
2026-03-17 00:58:40.154254 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.154258 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.154261 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.154264 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.154267 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.154270 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.154273 | orchestrator |
2026-03-17 00:58:40.154276 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-17 00:58:40.154279 | orchestrator | Tuesday 17 March 2026 00:56:21 +0000 (0:00:01.123) 0:08:00.082 *********
2026-03-17 00:58:40.154282 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.154285 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.154288 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.154291 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.154294 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.154297 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.154300 | orchestrator |
2026-03-17 00:58:40.154303 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-17 00:58:40.154307 | orchestrator | Tuesday 17 March 2026 00:56:21 +0000 (0:00:00.506) 0:08:00.589 *********
2026-03-17 00:58:40.154310 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.154313 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.154316 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.154319 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.154322 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.154325 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.154328 | orchestrator |
2026-03-17 00:58:40.154331 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-17 00:58:40.154334 | orchestrator | Tuesday 17 March 2026 00:56:22 +0000 (0:00:00.663) 0:08:01.252 *********
2026-03-17 00:58:40.154337 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.154340 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.154344 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.154347 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.154350 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.154353 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.154356 | orchestrator |
2026-03-17 00:58:40.154359 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-17 00:58:40.154362 | orchestrator | Tuesday 17 March 2026 00:56:22 +0000 (0:00:00.509) 0:08:01.761 *********
2026-03-17 00:58:40.154365 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.154368 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.154371 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.154374 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.154377 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.154380 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.154384 | orchestrator |
2026-03-17 00:58:40.154387 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-17 00:58:40.154393 | orchestrator | Tuesday 17 March 2026 00:56:23 +0000 (0:00:00.634) 0:08:02.396 *********
2026-03-17 00:58:40.154397 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.154400 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.154403 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.154406 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.154409 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.154412 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.154415 | orchestrator |
2026-03-17 00:58:40.154418 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-17 00:58:40.154421 | orchestrator | Tuesday 17 March 2026 00:56:23 +0000 (0:00:00.491) 0:08:02.887 *********
2026-03-17 00:58:40.154424 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.154427 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.154431 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.154434 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.154437 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.154440 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.154443 | orchestrator |
2026-03-17 00:58:40.154446 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-17 00:58:40.154449 | orchestrator | Tuesday 17 March 2026 00:56:24 +0000 (0:00:00.658) 0:08:03.545 *********
2026-03-17 00:58:40.154452 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.154455 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.154458 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.154461 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:40.154464 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:40.154467 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:58:40.154470 | orchestrator |
2026-03-17 00:58:40.154473 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-17 00:58:40.154477 | orchestrator | Tuesday 17 March 2026 00:56:25 +0000 (0:00:00.508) 0:08:04.054 *********
2026-03-17 00:58:40.154480 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:58:40.154484 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:58:40.154488 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:58:40.154491 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.154494 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.154497 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.154500 | orchestrator |
2026-03-17 00:58:40.154503 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-17 00:58:40.154506 | orchestrator | Tuesday 17 March 2026 00:56:25 +0000 (0:00:00.663) 0:08:04.717 *********
2026-03-17 00:58:40.154509 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.154512 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.154515 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.154519 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.154522 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.154525 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.154528 | orchestrator |
2026-03-17 00:58:40.154531 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-17 00:58:40.154534 | orchestrator | Tuesday 17 March 2026 00:56:26 +0000 (0:00:00.539) 0:08:05.257 *********
2026-03-17 00:58:40.154537 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.154540 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.154543 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.154546 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.154549 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.154552 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.154555 | orchestrator |
2026-03-17 00:58:40.154558 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-03-17 00:58:40.154561 | orchestrator | Tuesday 17 March 2026 00:56:27 +0000 (0:00:01.074) 0:08:06.332 *********
2026-03-17 00:58:40.154565 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-17 00:58:40.154568 | orchestrator |
2026-03-17 00:58:40.154571 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-17 00:58:40.154576 | orchestrator | Tuesday 17 March 2026 00:56:31 +0000 (0:00:04.516) 0:08:10.848 *********
2026-03-17 00:58:40.154579 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-17 00:58:40.154582 | orchestrator |
2026-03-17 00:58:40.154585 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-17 00:58:40.154588 | orchestrator | Tuesday 17 March 2026 00:56:33 +0000 (0:00:02.080) 0:08:12.929 *********
2026-03-17 00:58:40.154592 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:58:40.154595 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:58:40.154598 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:58:40.154601 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.154604 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.154607 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.154610 | orchestrator |
2026-03-17 00:58:40.154613 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-17 00:58:40.154616 | orchestrator | Tuesday 17 March 2026 00:56:35 +0000 (0:00:01.583) 0:08:14.512 *********
2026-03-17 00:58:40.154620 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:58:40.154623 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:58:40.154626 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:58:40.154629 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.154632 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.154635 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.154638 | orchestrator |
2026-03-17 00:58:40.154641 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-03-17 00:58:40.154644 | orchestrator | Tuesday 17 March 2026 00:56:36 +0000 (0:00:00.871) 0:08:15.384 *********
2026-03-17 00:58:40.154647 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.154651 | orchestrator |
2026-03-17 00:58:40.154654 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-03-17 00:58:40.154657 | orchestrator | Tuesday 17 March 2026 00:56:37 +0000 (0:00:01.010) 0:08:16.395 *********
2026-03-17 00:58:40.154660 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:58:40.154663 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:58:40.154667 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:58:40.154670 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.154673 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.154676 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.154679 | orchestrator |
2026-03-17 00:58:40.154684 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-03-17 00:58:40.154687 | orchestrator | Tuesday 17 March 2026 00:56:39 +0000 (0:00:01.634) 0:08:18.029 *********
2026-03-17 00:58:40.154690 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:58:40.154693 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:58:40.154696 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:58:40.154699 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.154702 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.154705 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.154708 | orchestrator |
2026-03-17 00:58:40.154711 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-03-17 00:58:40.154714 | orchestrator | Tuesday 17 March 2026 00:56:42 +0000 (0:00:03.227) 0:08:21.256 *********
2026-03-17 00:58:40.154718 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:58:40.154721 | orchestrator |
2026-03-17 00:58:40.154724 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-03-17 00:58:40.154727 | orchestrator | Tuesday 17 March 2026 00:56:43 +0000 (0:00:01.110) 0:08:22.367 *********
2026-03-17 00:58:40.154730 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.154733 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:40.154738 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:58:40.154742 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:40.154745 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:40.154748 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:40.154751 | orchestrator |
2026-03-17 00:58:40.154754 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-03-17 00:58:40.154757 | orchestrator | Tuesday 17 March 2026 00:56:44 +0000 (0:00:00.663) 0:08:23.030 *********
2026-03-17 00:58:40.154760 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:58:40.154765 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:58:40.154768 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:58:40.154771 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:58:40.154776 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:58:40.154781 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:58:40.154786 | orchestrator |
2026-03-17 00:58:40.154791 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-03-17 00:58:40.154796 | orchestrator | Tuesday 17 March 2026 00:56:46 +0000 (0:00:02.297) 0:08:25.328 *********
2026-03-17 00:58:40.154801 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:40.154804 |
orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.154807 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.154810 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:40.154815 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:40.154820 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:40.154825 | orchestrator | 2026-03-17 00:58:40.154829 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-17 00:58:40.154834 | orchestrator | 2026-03-17 00:58:40.154839 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-17 00:58:40.154844 | orchestrator | Tuesday 17 March 2026 00:56:47 +0000 (0:00:00.841) 0:08:26.169 ********* 2026-03-17 00:58:40.154850 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.154855 | orchestrator | 2026-03-17 00:58:40.154860 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-17 00:58:40.154866 | orchestrator | Tuesday 17 March 2026 00:56:47 +0000 (0:00:00.461) 0:08:26.631 ********* 2026-03-17 00:58:40.154870 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.154876 | orchestrator | 2026-03-17 00:58:40.154881 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-17 00:58:40.154884 | orchestrator | Tuesday 17 March 2026 00:56:48 +0000 (0:00:00.798) 0:08:27.430 ********* 2026-03-17 00:58:40.154887 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.154890 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.154893 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.154897 | orchestrator | 2026-03-17 00:58:40.154900 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-17 00:58:40.154904 | orchestrator | Tuesday 17 March 2026 00:56:48 +0000 (0:00:00.342) 0:08:27.772 ********* 2026-03-17 00:58:40.154909 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.154914 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.154919 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.154923 | orchestrator | 2026-03-17 00:58:40.154928 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-17 00:58:40.154947 | orchestrator | Tuesday 17 March 2026 00:56:49 +0000 (0:00:00.907) 0:08:28.680 ********* 2026-03-17 00:58:40.154952 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.154957 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.154962 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.154967 | orchestrator | 2026-03-17 00:58:40.154973 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-17 00:58:40.154978 | orchestrator | Tuesday 17 March 2026 00:56:50 +0000 (0:00:01.034) 0:08:29.715 ********* 2026-03-17 00:58:40.154983 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.154993 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.154998 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.155003 | orchestrator | 2026-03-17 00:58:40.155008 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-17 00:58:40.155014 | orchestrator | Tuesday 17 March 2026 00:56:51 +0000 (0:00:00.715) 0:08:30.431 ********* 2026-03-17 00:58:40.155019 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.155024 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.155030 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.155035 | orchestrator | 2026-03-17 00:58:40.155040 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-17 
00:58:40.155045 | orchestrator | Tuesday 17 March 2026 00:56:51 +0000 (0:00:00.265) 0:08:30.697 ********* 2026-03-17 00:58:40.155050 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.155055 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.155061 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.155066 | orchestrator | 2026-03-17 00:58:40.155075 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 00:58:40.155079 | orchestrator | Tuesday 17 March 2026 00:56:52 +0000 (0:00:00.296) 0:08:30.993 ********* 2026-03-17 00:58:40.155082 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.155085 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.155088 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.155091 | orchestrator | 2026-03-17 00:58:40.155094 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 00:58:40.155097 | orchestrator | Tuesday 17 March 2026 00:56:52 +0000 (0:00:00.489) 0:08:31.483 ********* 2026-03-17 00:58:40.155100 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.155103 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.155107 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.155112 | orchestrator | 2026-03-17 00:58:40.155117 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-17 00:58:40.155122 | orchestrator | Tuesday 17 March 2026 00:56:53 +0000 (0:00:00.725) 0:08:32.208 ********* 2026-03-17 00:58:40.155127 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.155133 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.155138 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.155143 | orchestrator | 2026-03-17 00:58:40.155148 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-17 00:58:40.155153 | orchestrator | 
Tuesday 17 March 2026 00:56:54 +0000 (0:00:00.778) 0:08:32.987 ********* 2026-03-17 00:58:40.155158 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.155163 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.155168 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.155174 | orchestrator | 2026-03-17 00:58:40.155177 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-17 00:58:40.155180 | orchestrator | Tuesday 17 March 2026 00:56:54 +0000 (0:00:00.262) 0:08:33.249 ********* 2026-03-17 00:58:40.155183 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.155191 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.155194 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.155197 | orchestrator | 2026-03-17 00:58:40.155201 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-17 00:58:40.155204 | orchestrator | Tuesday 17 March 2026 00:56:54 +0000 (0:00:00.408) 0:08:33.658 ********* 2026-03-17 00:58:40.155207 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.155210 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.155213 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.155216 | orchestrator | 2026-03-17 00:58:40.155219 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-17 00:58:40.155222 | orchestrator | Tuesday 17 March 2026 00:56:54 +0000 (0:00:00.280) 0:08:33.938 ********* 2026-03-17 00:58:40.155225 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.155228 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.155231 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.155238 | orchestrator | 2026-03-17 00:58:40.155241 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-17 00:58:40.155244 | orchestrator | Tuesday 17 March 2026 00:56:55 +0000 
(0:00:00.284) 0:08:34.223 ********* 2026-03-17 00:58:40.155247 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.155250 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.155253 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.155256 | orchestrator | 2026-03-17 00:58:40.155259 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-17 00:58:40.155262 | orchestrator | Tuesday 17 March 2026 00:56:55 +0000 (0:00:00.303) 0:08:34.526 ********* 2026-03-17 00:58:40.155265 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.155268 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.155271 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.155274 | orchestrator | 2026-03-17 00:58:40.155277 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-17 00:58:40.155280 | orchestrator | Tuesday 17 March 2026 00:56:56 +0000 (0:00:00.432) 0:08:34.958 ********* 2026-03-17 00:58:40.155283 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.155287 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.155290 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.155293 | orchestrator | 2026-03-17 00:58:40.155296 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-17 00:58:40.155299 | orchestrator | Tuesday 17 March 2026 00:56:56 +0000 (0:00:00.262) 0:08:35.220 ********* 2026-03-17 00:58:40.155302 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.155305 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.155308 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.155311 | orchestrator | 2026-03-17 00:58:40.155314 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-17 00:58:40.155317 | orchestrator | Tuesday 17 March 2026 00:56:56 +0000 (0:00:00.253) 
0:08:35.474 ********* 2026-03-17 00:58:40.155320 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.155323 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.155326 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.155329 | orchestrator | 2026-03-17 00:58:40.155332 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-17 00:58:40.155335 | orchestrator | Tuesday 17 March 2026 00:56:56 +0000 (0:00:00.270) 0:08:35.744 ********* 2026-03-17 00:58:40.155339 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.155342 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.155345 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.155348 | orchestrator | 2026-03-17 00:58:40.155351 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-17 00:58:40.155354 | orchestrator | Tuesday 17 March 2026 00:56:57 +0000 (0:00:00.640) 0:08:36.385 ********* 2026-03-17 00:58:40.155357 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.155360 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.155363 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-03-17 00:58:40.155366 | orchestrator | 2026-03-17 00:58:40.155369 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-17 00:58:40.155372 | orchestrator | Tuesday 17 March 2026 00:56:57 +0000 (0:00:00.393) 0:08:36.778 ********* 2026-03-17 00:58:40.155375 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-17 00:58:40.155378 | orchestrator | 2026-03-17 00:58:40.155381 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-17 00:58:40.155411 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:02.154) 0:08:38.933 ********* 2026-03-17 00:58:40.155416 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-17 00:58:40.155420 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.155428 | orchestrator | 2026-03-17 00:58:40.155431 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-17 00:58:40.155435 | orchestrator | Tuesday 17 March 2026 00:57:00 +0000 (0:00:00.223) 0:08:39.156 ********* 2026-03-17 00:58:40.155441 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-17 00:58:40.155450 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-17 00:58:40.155456 | orchestrator | 2026-03-17 00:58:40.155461 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-17 00:58:40.155466 | orchestrator | Tuesday 17 March 2026 00:57:08 +0000 (0:00:08.676) 0:08:47.833 ********* 2026-03-17 00:58:40.155473 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-17 00:58:40.155476 | orchestrator | 2026-03-17 00:58:40.155479 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-17 00:58:40.155482 | orchestrator | Tuesday 17 March 2026 00:57:12 +0000 (0:00:03.902) 0:08:51.736 ********* 2026-03-17 00:58:40.155485 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-17 00:58:40.155488 | orchestrator | 2026-03-17 00:58:40.155491 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-17 00:58:40.155494 | orchestrator | Tuesday 17 March 2026 00:57:13 +0000 (0:00:00.484) 0:08:52.220 ********* 2026-03-17 00:58:40.155497 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-17 00:58:40.155500 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-17 00:58:40.155503 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-17 00:58:40.155507 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-17 00:58:40.155510 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-17 00:58:40.155513 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-17 00:58:40.155516 | orchestrator | 2026-03-17 00:58:40.155519 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-17 00:58:40.155522 | orchestrator | Tuesday 17 March 2026 00:57:14 +0000 (0:00:01.006) 0:08:53.226 ********* 2026-03-17 00:58:40.155525 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:58:40.155528 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-17 00:58:40.155531 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 00:58:40.155534 | orchestrator | 2026-03-17 00:58:40.155537 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-17 00:58:40.155540 | orchestrator | Tuesday 17 March 2026 00:57:16 +0000 (0:00:02.146) 0:08:55.372 ********* 2026-03-17 00:58:40.155543 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 00:58:40.155547 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-03-17 00:58:40.155550 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.155553 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 00:58:40.155556 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-17 00:58:40.155559 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.155562 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 00:58:40.155565 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-17 00:58:40.155568 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.155571 | orchestrator | 2026-03-17 00:58:40.155574 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-17 00:58:40.155580 | orchestrator | Tuesday 17 March 2026 00:57:17 +0000 (0:00:01.233) 0:08:56.606 ********* 2026-03-17 00:58:40.155583 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.155586 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.155589 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.155592 | orchestrator | 2026-03-17 00:58:40.155595 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-17 00:58:40.155598 | orchestrator | Tuesday 17 March 2026 00:57:20 +0000 (0:00:02.452) 0:08:59.059 ********* 2026-03-17 00:58:40.155601 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.155604 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.155607 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.155610 | orchestrator | 2026-03-17 00:58:40.155613 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-17 00:58:40.155617 | orchestrator | Tuesday 17 March 2026 00:57:20 +0000 (0:00:00.280) 0:08:59.339 ********* 2026-03-17 00:58:40.155620 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-17 00:58:40.155623 | orchestrator | 2026-03-17 00:58:40.155626 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-17 00:58:40.155632 | orchestrator | Tuesday 17 March 2026 00:57:20 +0000 (0:00:00.599) 0:08:59.939 ********* 2026-03-17 00:58:40.155635 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.155638 | orchestrator | 2026-03-17 00:58:40.155641 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-17 00:58:40.155644 | orchestrator | Tuesday 17 March 2026 00:57:21 +0000 (0:00:00.504) 0:09:00.444 ********* 2026-03-17 00:58:40.155648 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.155651 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.155654 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.155657 | orchestrator | 2026-03-17 00:58:40.155660 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-17 00:58:40.155663 | orchestrator | Tuesday 17 March 2026 00:57:22 +0000 (0:00:01.339) 0:09:01.783 ********* 2026-03-17 00:58:40.155666 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.155669 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.155672 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.155675 | orchestrator | 2026-03-17 00:58:40.155678 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-17 00:58:40.155681 | orchestrator | Tuesday 17 March 2026 00:57:24 +0000 (0:00:01.464) 0:09:03.248 ********* 2026-03-17 00:58:40.155684 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.155687 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.155690 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.155693 | orchestrator | 2026-03-17 
00:58:40.155697 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-17 00:58:40.155700 | orchestrator | Tuesday 17 March 2026 00:57:26 +0000 (0:00:01.998) 0:09:05.246 ********* 2026-03-17 00:58:40.155703 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.155708 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.155711 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.155714 | orchestrator | 2026-03-17 00:58:40.155717 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-17 00:58:40.155720 | orchestrator | Tuesday 17 March 2026 00:57:28 +0000 (0:00:01.863) 0:09:07.109 ********* 2026-03-17 00:58:40.155723 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.155726 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.155729 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.155733 | orchestrator | 2026-03-17 00:58:40.155736 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-17 00:58:40.155739 | orchestrator | Tuesday 17 March 2026 00:57:29 +0000 (0:00:01.252) 0:09:08.362 ********* 2026-03-17 00:58:40.155742 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.155747 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.155750 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.155753 | orchestrator | 2026-03-17 00:58:40.155757 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-17 00:58:40.155760 | orchestrator | Tuesday 17 March 2026 00:57:30 +0000 (0:00:00.650) 0:09:09.012 ********* 2026-03-17 00:58:40.155763 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.155766 | orchestrator | 2026-03-17 00:58:40.155769 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-17 00:58:40.155772 | orchestrator | Tuesday 17 March 2026 00:57:30 +0000 (0:00:00.588) 0:09:09.600 ********* 2026-03-17 00:58:40.155775 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.155778 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.155781 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.155784 | orchestrator | 2026-03-17 00:58:40.155787 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-17 00:58:40.155790 | orchestrator | Tuesday 17 March 2026 00:57:30 +0000 (0:00:00.262) 0:09:09.862 ********* 2026-03-17 00:58:40.155793 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.155797 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.155800 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.155803 | orchestrator | 2026-03-17 00:58:40.155807 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-17 00:58:40.155812 | orchestrator | Tuesday 17 March 2026 00:57:32 +0000 (0:00:01.105) 0:09:10.967 ********* 2026-03-17 00:58:40.155817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:58:40.155823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:58:40.155826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:58:40.155829 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.155832 | orchestrator | 2026-03-17 00:58:40.155836 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-17 00:58:40.155839 | orchestrator | Tuesday 17 March 2026 00:57:32 +0000 (0:00:00.695) 0:09:11.663 ********* 2026-03-17 00:58:40.155842 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.155845 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.155848 | orchestrator | ok: [testbed-node-5] 2026-03-17 
00:58:40.155851 | orchestrator | 2026-03-17 00:58:40.155854 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-17 00:58:40.155857 | orchestrator | 2026-03-17 00:58:40.155860 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-17 00:58:40.155863 | orchestrator | Tuesday 17 March 2026 00:57:33 +0000 (0:00:00.625) 0:09:12.289 ********* 2026-03-17 00:58:40.155866 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.155869 | orchestrator | 2026-03-17 00:58:40.155872 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-17 00:58:40.155875 | orchestrator | Tuesday 17 March 2026 00:57:33 +0000 (0:00:00.445) 0:09:12.734 ********* 2026-03-17 00:58:40.155878 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.155882 | orchestrator | 2026-03-17 00:58:40.155885 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-17 00:58:40.155888 | orchestrator | Tuesday 17 March 2026 00:57:34 +0000 (0:00:00.588) 0:09:13.322 ********* 2026-03-17 00:58:40.155893 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.155896 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.155899 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.155902 | orchestrator | 2026-03-17 00:58:40.155905 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-17 00:58:40.155908 | orchestrator | Tuesday 17 March 2026 00:57:34 +0000 (0:00:00.273) 0:09:13.596 ********* 2026-03-17 00:58:40.155914 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.155917 | orchestrator | ok: [testbed-node-4] 2026-03-17 
00:58:40.155920 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.155923 | orchestrator | 2026-03-17 00:58:40.155926 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-17 00:58:40.155929 | orchestrator | Tuesday 17 March 2026 00:57:35 +0000 (0:00:00.670) 0:09:14.266 ********* 2026-03-17 00:58:40.155942 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.155948 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.155952 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.155958 | orchestrator | 2026-03-17 00:58:40.155961 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-17 00:58:40.155964 | orchestrator | Tuesday 17 March 2026 00:57:36 +0000 (0:00:00.809) 0:09:15.076 ********* 2026-03-17 00:58:40.155967 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.155970 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.155973 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.155976 | orchestrator | 2026-03-17 00:58:40.155979 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-17 00:58:40.155982 | orchestrator | Tuesday 17 March 2026 00:57:36 +0000 (0:00:00.663) 0:09:15.739 ********* 2026-03-17 00:58:40.155985 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.155988 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.155991 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.155994 | orchestrator | 2026-03-17 00:58:40.156000 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-17 00:58:40.156003 | orchestrator | Tuesday 17 March 2026 00:57:37 +0000 (0:00:00.305) 0:09:16.045 ********* 2026-03-17 00:58:40.156006 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.156009 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.156012 | orchestrator | skipping: 
[testbed-node-5] 2026-03-17 00:58:40.156015 | orchestrator | 2026-03-17 00:58:40.156018 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 00:58:40.156021 | orchestrator | Tuesday 17 March 2026 00:57:37 +0000 (0:00:00.300) 0:09:16.345 ********* 2026-03-17 00:58:40.156024 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.156027 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.156030 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.156034 | orchestrator | 2026-03-17 00:58:40.156037 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 00:58:40.156040 | orchestrator | Tuesday 17 March 2026 00:57:37 +0000 (0:00:00.286) 0:09:16.632 ********* 2026-03-17 00:58:40.156043 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.156046 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.156049 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.156052 | orchestrator | 2026-03-17 00:58:40.156055 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-17 00:58:40.156058 | orchestrator | Tuesday 17 March 2026 00:57:38 +0000 (0:00:00.997) 0:09:17.630 ********* 2026-03-17 00:58:40.156061 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.156065 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.156070 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.156075 | orchestrator | 2026-03-17 00:58:40.156080 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-17 00:58:40.156085 | orchestrator | Tuesday 17 March 2026 00:57:39 +0000 (0:00:00.701) 0:09:18.332 ********* 2026-03-17 00:58:40.156091 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.156096 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.156101 | orchestrator | skipping: [testbed-node-5] 2026-03-17 
00:58:40.156105 | orchestrator | 2026-03-17 00:58:40.156108 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-17 00:58:40.156112 | orchestrator | Tuesday 17 March 2026 00:57:39 +0000 (0:00:00.284) 0:09:18.616 ********* 2026-03-17 00:58:40.156117 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.156126 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.156131 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.156136 | orchestrator | 2026-03-17 00:58:40.156142 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-17 00:58:40.156146 | orchestrator | Tuesday 17 March 2026 00:57:39 +0000 (0:00:00.279) 0:09:18.896 ********* 2026-03-17 00:58:40.156149 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.156152 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.156155 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.156158 | orchestrator | 2026-03-17 00:58:40.156161 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-17 00:58:40.156164 | orchestrator | Tuesday 17 March 2026 00:57:40 +0000 (0:00:00.647) 0:09:19.544 ********* 2026-03-17 00:58:40.156167 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.156170 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.156173 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.156176 | orchestrator | 2026-03-17 00:58:40.156179 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-17 00:58:40.156182 | orchestrator | Tuesday 17 March 2026 00:57:40 +0000 (0:00:00.331) 0:09:19.875 ********* 2026-03-17 00:58:40.156185 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.156188 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.156191 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.156194 | orchestrator | 2026-03-17 
00:58:40.156197 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-17 00:58:40.156200 | orchestrator | Tuesday 17 March 2026 00:57:41 +0000 (0:00:00.316) 0:09:20.192 ********* 2026-03-17 00:58:40.156203 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.156207 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.156210 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.156213 | orchestrator | 2026-03-17 00:58:40.156216 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-17 00:58:40.156219 | orchestrator | Tuesday 17 March 2026 00:57:41 +0000 (0:00:00.281) 0:09:20.474 ********* 2026-03-17 00:58:40.156222 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.156225 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.156230 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.156233 | orchestrator | 2026-03-17 00:58:40.156236 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-17 00:58:40.156239 | orchestrator | Tuesday 17 March 2026 00:57:42 +0000 (0:00:00.551) 0:09:21.026 ********* 2026-03-17 00:58:40.156242 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.156245 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.156248 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.156251 | orchestrator | 2026-03-17 00:58:40.156254 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-17 00:58:40.156258 | orchestrator | Tuesday 17 March 2026 00:57:42 +0000 (0:00:00.319) 0:09:21.345 ********* 2026-03-17 00:58:40.156261 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.156264 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.156267 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.156270 | orchestrator | 2026-03-17 00:58:40.156273 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-17 00:58:40.156276 | orchestrator | Tuesday 17 March 2026 00:57:42 +0000 (0:00:00.316) 0:09:21.661 ********* 2026-03-17 00:58:40.156279 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.156282 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.156285 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.156288 | orchestrator | 2026-03-17 00:58:40.156291 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-17 00:58:40.156294 | orchestrator | Tuesday 17 March 2026 00:57:43 +0000 (0:00:00.731) 0:09:22.393 ********* 2026-03-17 00:58:40.156297 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.156300 | orchestrator | 2026-03-17 00:58:40.156303 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-17 00:58:40.156312 | orchestrator | Tuesday 17 March 2026 00:57:43 +0000 (0:00:00.470) 0:09:22.863 ********* 2026-03-17 00:58:40.156317 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:58:40.156322 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-17 00:58:40.156328 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 00:58:40.156333 | orchestrator | 2026-03-17 00:58:40.156339 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-17 00:58:40.156344 | orchestrator | Tuesday 17 March 2026 00:57:46 +0000 (0:00:02.510) 0:09:25.374 ********* 2026-03-17 00:58:40.156350 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 00:58:40.156353 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 00:58:40.156356 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-17 00:58:40.156359 
| orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.156362 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-17 00:58:40.156365 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.156368 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 00:58:40.156371 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-17 00:58:40.156375 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.156378 | orchestrator | 2026-03-17 00:58:40.156381 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-17 00:58:40.156384 | orchestrator | Tuesday 17 March 2026 00:57:47 +0000 (0:00:01.265) 0:09:26.640 ********* 2026-03-17 00:58:40.156387 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.156390 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.156393 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.156396 | orchestrator | 2026-03-17 00:58:40.156399 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-17 00:58:40.156402 | orchestrator | Tuesday 17 March 2026 00:57:47 +0000 (0:00:00.276) 0:09:26.917 ********* 2026-03-17 00:58:40.156405 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.156408 | orchestrator | 2026-03-17 00:58:40.156411 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-17 00:58:40.156414 | orchestrator | Tuesday 17 March 2026 00:57:48 +0000 (0:00:00.487) 0:09:27.404 ********* 2026-03-17 00:58:40.156418 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-17 00:58:40.156421 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-17 00:58:40.156424 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-17 00:58:40.156427 | orchestrator | 2026-03-17 00:58:40.156430 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-17 00:58:40.156434 | orchestrator | Tuesday 17 March 2026 00:57:49 +0000 (0:00:01.256) 0:09:28.661 ********* 2026-03-17 00:58:40.156437 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:58:40.156440 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-17 00:58:40.156443 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:58:40.156446 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-17 00:58:40.156449 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:58:40.156454 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-17 00:58:40.156459 | orchestrator | 2026-03-17 00:58:40.156463 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-17 00:58:40.156466 | orchestrator | Tuesday 17 March 2026 00:57:54 +0000 (0:00:05.267) 0:09:33.929 ********* 2026-03-17 00:58:40.156469 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:58:40.156472 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 00:58:40.156475 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:58:40.156478 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 00:58:40.156481 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 00:58:40.156484 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 00:58:40.156487 | orchestrator | 2026-03-17 00:58:40.156490 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-17 00:58:40.156493 | orchestrator | Tuesday 17 March 2026 00:57:58 +0000 (0:00:03.250) 0:09:37.180 ********* 2026-03-17 00:58:40.156496 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 00:58:40.156499 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.156502 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 00:58:40.156505 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.156508 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 00:58:40.156511 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.156515 | orchestrator | 2026-03-17 00:58:40.156518 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-17 00:58:40.156523 | orchestrator | Tuesday 17 March 2026 00:57:59 +0000 (0:00:01.094) 0:09:38.274 ********* 2026-03-17 00:58:40.156526 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-17 00:58:40.156529 | orchestrator | 2026-03-17 00:58:40.156532 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-17 00:58:40.156535 | orchestrator | Tuesday 17 March 2026 00:57:59 +0000 (0:00:00.204) 0:09:38.479 ********* 2026-03-17 00:58:40.156538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-17 00:58:40.156542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:58:40.156545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:58:40.156548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:58:40.156551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:58:40.156554 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.156557 | orchestrator | 2026-03-17 00:58:40.156560 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-17 00:58:40.156563 | orchestrator | Tuesday 17 March 2026 00:58:00 +0000 (0:00:00.831) 0:09:39.310 ********* 2026-03-17 00:58:40.156566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:58:40.156570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:58:40.156573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:58:40.156576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:58:40.156581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 00:58:40.156584 | orchestrator | skipping: [testbed-node-3] 2026-03-17 
00:58:40.156587 | orchestrator | 2026-03-17 00:58:40.156591 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-17 00:58:40.156594 | orchestrator | Tuesday 17 March 2026 00:58:00 +0000 (0:00:00.529) 0:09:39.840 ********* 2026-03-17 00:58:40.156597 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 00:58:40.156600 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 00:58:40.156603 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 00:58:40.156606 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 00:58:40.156611 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 00:58:40.156614 | orchestrator | 2026-03-17 00:58:40.156617 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-17 00:58:40.156620 | orchestrator | Tuesday 17 March 2026 00:58:28 +0000 (0:00:27.634) 0:10:07.474 ********* 2026-03-17 00:58:40.156623 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.156627 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.156630 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.156633 | orchestrator | 2026-03-17 00:58:40.156636 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-17 00:58:40.156639 | orchestrator | 
Tuesday 17 March 2026 00:58:28 +0000 (0:00:00.329) 0:10:07.804 ********* 2026-03-17 00:58:40.156642 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.156645 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.156648 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.156651 | orchestrator | 2026-03-17 00:58:40.156654 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-17 00:58:40.156657 | orchestrator | Tuesday 17 March 2026 00:58:29 +0000 (0:00:00.312) 0:10:08.116 ********* 2026-03-17 00:58:40.156660 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.156664 | orchestrator | 2026-03-17 00:58:40.156668 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-17 00:58:40.156673 | orchestrator | Tuesday 17 March 2026 00:58:29 +0000 (0:00:00.741) 0:10:08.857 ********* 2026-03-17 00:58:40.156678 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.156684 | orchestrator | 2026-03-17 00:58:40.156691 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-17 00:58:40.156697 | orchestrator | Tuesday 17 March 2026 00:58:30 +0000 (0:00:00.539) 0:10:09.397 ********* 2026-03-17 00:58:40.156700 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.156703 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.156706 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.156709 | orchestrator | 2026-03-17 00:58:40.156712 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-17 00:58:40.156715 | orchestrator | Tuesday 17 March 2026 00:58:31 +0000 (0:00:01.106) 0:10:10.504 ********* 2026-03-17 00:58:40.156719 | orchestrator | changed: 
[testbed-node-3] 2026-03-17 00:58:40.156722 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.156725 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.156730 | orchestrator | 2026-03-17 00:58:40.156733 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-17 00:58:40.156736 | orchestrator | Tuesday 17 March 2026 00:58:33 +0000 (0:00:01.473) 0:10:11.978 ********* 2026-03-17 00:58:40.156739 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:40.156742 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:40.156746 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:40.156749 | orchestrator | 2026-03-17 00:58:40.156752 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-17 00:58:40.156755 | orchestrator | Tuesday 17 March 2026 00:58:34 +0000 (0:00:01.726) 0:10:13.704 ********* 2026-03-17 00:58:40.156758 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-17 00:58:40.156761 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-17 00:58:40.156764 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-17 00:58:40.156767 | orchestrator | 2026-03-17 00:58:40.156770 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-17 00:58:40.156773 | orchestrator | Tuesday 17 March 2026 00:58:36 +0000 (0:00:02.209) 0:10:15.914 ********* 2026-03-17 00:58:40.156776 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.156779 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.156783 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.156786 | orchestrator 
| 2026-03-17 00:58:40.156789 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-17 00:58:40.156792 | orchestrator | Tuesday 17 March 2026 00:58:37 +0000 (0:00:00.289) 0:10:16.204 ********* 2026-03-17 00:58:40.156795 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:58:40.156798 | orchestrator | 2026-03-17 00:58:40.156801 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-17 00:58:40.156804 | orchestrator | Tuesday 17 March 2026 00:58:37 +0000 (0:00:00.451) 0:10:16.656 ********* 2026-03-17 00:58:40.156807 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.156810 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.156813 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.156816 | orchestrator | 2026-03-17 00:58:40.156819 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-17 00:58:40.156822 | orchestrator | Tuesday 17 March 2026 00:58:38 +0000 (0:00:00.406) 0:10:17.062 ********* 2026-03-17 00:58:40.156826 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:40.156829 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:40.156832 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:40.156835 | orchestrator | 2026-03-17 00:58:40.156838 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-17 00:58:40.156841 | orchestrator | Tuesday 17 March 2026 00:58:38 +0000 (0:00:00.294) 0:10:17.357 ********* 2026-03-17 00:58:40.156844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:58:40.156847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:58:40.156850 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:58:40.156853 | orchestrator 
| skipping: [testbed-node-3] 2026-03-17 00:58:40.156856 | orchestrator | 2026-03-17 00:58:40.156861 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-17 00:58:40.156864 | orchestrator | Tuesday 17 March 2026 00:58:38 +0000 (0:00:00.546) 0:10:17.904 ********* 2026-03-17 00:58:40.156867 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:40.156870 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:40.156873 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:40.156876 | orchestrator | 2026-03-17 00:58:40.156880 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:58:40.156886 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-17 00:58:40.156889 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-17 00:58:40.156892 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-17 00:58:40.156895 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-17 00:58:40.156898 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-17 00:58:40.156903 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-17 00:58:40.156906 | orchestrator | 2026-03-17 00:58:40.156910 | orchestrator | 2026-03-17 00:58:40.156913 | orchestrator | 2026-03-17 00:58:40.156916 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:58:40.156919 | orchestrator | Tuesday 17 March 2026 00:58:39 +0000 (0:00:00.203) 0:10:18.107 ********* 2026-03-17 00:58:40.156922 | orchestrator | =============================================================================== 
2026-03-17 00:58:40.156925 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 61.28s 2026-03-17 00:58:40.156928 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 37.72s 2026-03-17 00:58:40.156955 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.19s 2026-03-17 00:58:40.156960 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 27.63s 2026-03-17 00:58:40.156963 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.72s 2026-03-17 00:58:40.156966 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.51s 2026-03-17 00:58:40.156969 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.56s 2026-03-17 00:58:40.156972 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 8.70s 2026-03-17 00:58:40.156975 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.68s 2026-03-17 00:58:40.156978 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.48s 2026-03-17 00:58:40.156981 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.10s 2026-03-17 00:58:40.156984 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.27s 2026-03-17 00:58:40.156987 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.85s 2026-03-17 00:58:40.156990 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.54s 2026-03-17 00:58:40.156993 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.52s 2026-03-17 00:58:40.156996 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.00s 2026-03-17 
00:58:40.157000 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.90s 2026-03-17 00:58:40.157003 | orchestrator | ceph-container-common : Enable ceph.target ------------------------------ 3.75s 2026-03-17 00:58:40.157006 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.43s 2026-03-17 00:58:40.157009 | orchestrator | ceph-rgw : Get keys from monitors --------------------------------------- 3.25s 2026-03-17 00:58:40.157012 | orchestrator | 2026-03-17 00:58:40 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:58:40.157015 | orchestrator | 2026-03-17 00:58:40 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:58:40.157018 | orchestrator | 2026-03-17 00:58:40 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:43.192119 | orchestrator | 2026-03-17 00:58:43 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 00:58:43.194499 | orchestrator | 2026-03-17 00:58:43 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:58:43.196181 | orchestrator | 2026-03-17 00:58:43 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:58:43.196780 | orchestrator | 2026-03-17 00:58:43 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:46.246568 | orchestrator | 2026-03-17 00:58:46 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 00:58:46.248750 | orchestrator | 2026-03-17 00:58:46 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:58:46.251115 | orchestrator | 2026-03-17 00:58:46 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:58:46.251157 | orchestrator | 2026-03-17 00:58:46 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:49.293905 | orchestrator | 2026-03-17 00:58:49 | INFO  | Task 
e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 00:58:49.295987 | orchestrator | 2026-03-17 00:58:49 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:58:49.298497 | orchestrator | 2026-03-17 00:58:49 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:58:49.298962 | orchestrator | 2026-03-17 00:58:49 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:52.335275 | orchestrator | 2026-03-17 00:58:52 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 00:58:52.337914 | orchestrator | 2026-03-17 00:58:52 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:58:52.339505 | orchestrator | 2026-03-17 00:58:52 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:58:52.339620 | orchestrator | 2026-03-17 00:58:52 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:55.386010 | orchestrator | 2026-03-17 00:58:55 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 00:58:55.386615 | orchestrator | 2026-03-17 00:58:55 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:58:55.387773 | orchestrator | 2026-03-17 00:58:55 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state STARTED 2026-03-17 00:58:55.387848 | orchestrator | 2026-03-17 00:58:55 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:58.418240 | orchestrator | 2026-03-17 00:58:58 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 00:58:58.419312 | orchestrator | 2026-03-17 00:58:58 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:58:58.423960 | orchestrator | 2026-03-17 00:58:58.424018 | orchestrator | 2026-03-17 00:58:58.424026 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:58:58.424032 | 
orchestrator | 2026-03-17 00:58:58.424037 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:58:58.424043 | orchestrator | Tuesday 17 March 2026 00:56:42 +0000 (0:00:00.247) 0:00:00.247 ********* 2026-03-17 00:58:58.424048 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:58.424054 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:58.424060 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:58.424066 | orchestrator | 2026-03-17 00:58:58.424071 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:58:58.424076 | orchestrator | Tuesday 17 March 2026 00:56:42 +0000 (0:00:00.256) 0:00:00.503 ********* 2026-03-17 00:58:58.424097 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-17 00:58:58.424103 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-17 00:58:58.424108 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-17 00:58:58.424113 | orchestrator | 2026-03-17 00:58:58.424119 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-17 00:58:58.424124 | orchestrator | 2026-03-17 00:58:58.424129 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-17 00:58:58.424135 | orchestrator | Tuesday 17 March 2026 00:56:42 +0000 (0:00:00.355) 0:00:00.858 ********* 2026-03-17 00:58:58.424140 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:58:58.424145 | orchestrator | 2026-03-17 00:58:58.424151 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-17 00:58:58.424156 | orchestrator | Tuesday 17 March 2026 00:56:43 +0000 (0:00:00.423) 0:00:01.282 ********* 2026-03-17 00:58:58.424162 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:58:58.424173 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:58:58.424179 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:58:58.424188 | orchestrator | 2026-03-17 00:58:58.424194 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-17 00:58:58.424199 | orchestrator | Tuesday 17 March 2026 00:56:43 +0000 (0:00:00.647) 0:00:01.929 ********* 2026-03-17 00:58:58.424216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:58:58.424227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:58:58.424244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:58:58.424256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:58:58.424265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:58:58.424271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:58:58.424276 | orchestrator | 2026-03-17 00:58:58.424281 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-17 00:58:58.424285 | orchestrator | Tuesday 17 March 2026 00:56:45 +0000 (0:00:01.600) 0:00:03.529 ********* 2026-03-17 00:58:58.424290 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:58:58.424294 | orchestrator | 2026-03-17 00:58:58.424299 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-17 00:58:58.424304 | orchestrator | Tuesday 17 March 2026 00:56:45 +0000 (0:00:00.452) 0:00:03.982 ********* 2026-03-17 00:58:58.424321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:58:58.424327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:58:58.424332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:58:58.424339 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:58:58.424349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:58:58.424358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:58:58.424364 | orchestrator | 2026-03-17 00:58:58.424369 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-17 00:58:58.424374 | orchestrator | Tuesday 17 March 2026 00:56:48 +0000 (0:00:02.731) 0:00:06.713 ********* 2026-03-17 00:58:58.424382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:58:58.424387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:58:58.424396 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:58.424405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:58:58.424411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:58:58.424417 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:58.424422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:58:58.424430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:58:58.424439 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:58.424444 | orchestrator | 2026-03-17 00:58:58.424449 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-17 00:58:58.424455 | 
orchestrator | Tuesday 17 March 2026 00:56:49 +0000 (0:00:01.078) 0:00:07.792 ********* 2026-03-17 00:58:58.424464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:58:58.424470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:58:58.424475 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:58.424480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:58:58.424488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:58:58.424501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-17 00:58:58.424508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-17 00:58:58.424513 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:58.424519 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:58.424524 | orchestrator | 2026-03-17 00:58:58.424530 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-17 00:58:58.424535 | orchestrator | Tuesday 17 March 2026 00:56:50 +0000 (0:00:00.912) 0:00:08.704 ********* 2026-03-17 00:58:58.424541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:58:58.424549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:58:58.424576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:58:58.424587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:58:58.424594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:58:58.424603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:58:58.424612 | orchestrator | 2026-03-17 00:58:58.424621 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-17 00:58:58.424627 | orchestrator | Tuesday 17 March 2026 00:56:52 +0000 (0:00:02.302) 0:00:11.007 ********* 2026-03-17 00:58:58.424632 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:58.424637 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:58.424642 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:58.424647 | orchestrator | 2026-03-17 00:58:58.424652 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-17 00:58:58.424657 | orchestrator | Tuesday 17 March 2026 00:56:55 +0000 (0:00:02.276) 0:00:13.283 ********* 2026-03-17 00:58:58.424660 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:58.424664 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:58.424668 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:58.424672 | orchestrator | 2026-03-17 00:58:58.424677 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-17 00:58:58.424682 | orchestrator | Tuesday 17 March 2026 00:56:56 +0000 (0:00:01.786) 0:00:15.069 ********* 2026-03-17 00:58:58.424692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:58:58.424699 | orchestrator | 2026-03-17 00:58:58 | INFO  | Task 5cf158f1-9eb6-407b-9475-43b56e37c2f2 is in state SUCCESS 2026-03-17 00:58:58.424705 | orchestrator | 2026-03-17 00:58:58 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:58.424712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:58:58.424720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True,
'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-17 00:58:58.424737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:58:58.424748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:58:58.424754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-03-17 00:58:58.424760 | orchestrator | 2026-03-17 00:58:58.424765 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-17 00:58:58.424771 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:02.101) 0:00:17.171 ********* 2026-03-17 00:58:58.424776 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:58.424782 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:58.424787 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:58.424796 | orchestrator | 2026-03-17 00:58:58.424803 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-17 00:58:58.424808 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:00.240) 0:00:17.412 ********* 2026-03-17 00:58:58.424814 | orchestrator | 2026-03-17 00:58:58.424819 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-17 00:58:58.424824 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:00.057) 0:00:17.469 ********* 2026-03-17 00:58:58.424830 | orchestrator | 2026-03-17 00:58:58.424836 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-17 00:58:58.424844 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:00.057) 0:00:17.527 ********* 2026-03-17 00:58:58.424850 | orchestrator | 2026-03-17 00:58:58.424855 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-17 00:58:58.424861 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:00.060) 0:00:17.587 ********* 2026-03-17 00:58:58.424867 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:58.424872 | orchestrator | 2026-03-17 00:58:58.424878 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-17 00:58:58.424883 | orchestrator | Tuesday 17 March 2026 
00:56:59 +0000 (0:00:00.452) 0:00:18.040 ********* 2026-03-17 00:58:58.424888 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:58.424894 | orchestrator | 2026-03-17 00:58:58.424899 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-17 00:58:58.424905 | orchestrator | Tuesday 17 March 2026 00:57:00 +0000 (0:00:00.203) 0:00:18.244 ********* 2026-03-17 00:58:58.424910 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:58.424928 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:58.424934 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:58.424939 | orchestrator | 2026-03-17 00:58:58.424943 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-17 00:58:58.424948 | orchestrator | Tuesday 17 March 2026 00:57:49 +0000 (0:00:48.975) 0:01:07.219 ********* 2026-03-17 00:58:58.424953 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:58.424957 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:58.424962 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:58.424966 | orchestrator | 2026-03-17 00:58:58.424971 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-17 00:58:58.424976 | orchestrator | Tuesday 17 March 2026 00:58:45 +0000 (0:00:56.125) 0:02:03.345 ********* 2026-03-17 00:58:58.424981 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:58:58.424986 | orchestrator | 2026-03-17 00:58:58.424990 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-17 00:58:58.424995 | orchestrator | Tuesday 17 March 2026 00:58:45 +0000 (0:00:00.637) 0:02:03.983 ********* 2026-03-17 00:58:58.425000 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:58.425006 | orchestrator | 2026-03-17 00:58:58.425011 | orchestrator | TASK 
[opensearch : Check if a log retention policy exists] ********************* 2026-03-17 00:58:58.425015 | orchestrator | Tuesday 17 March 2026 00:58:48 +0000 (0:00:02.752) 0:02:06.735 ********* 2026-03-17 00:58:58.425020 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:58.425026 | orchestrator | 2026-03-17 00:58:58.425030 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-17 00:58:58.425033 | orchestrator | Tuesday 17 March 2026 00:58:51 +0000 (0:00:02.766) 0:02:09.502 ********* 2026-03-17 00:58:58.425037 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:58.425040 | orchestrator | 2026-03-17 00:58:58.425047 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-17 00:58:58.425051 | orchestrator | Tuesday 17 March 2026 00:58:53 +0000 (0:00:02.481) 0:02:11.983 ********* 2026-03-17 00:58:58.425054 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:58.425057 | orchestrator | 2026-03-17 00:58:58.425060 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:58:58.425069 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-17 00:58:58.425073 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-17 00:58:58.425076 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-17 00:58:58.425079 | orchestrator | 2026-03-17 00:58:58.425082 | orchestrator | 2026-03-17 00:58:58.425086 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:58:58.425089 | orchestrator | Tuesday 17 March 2026 00:58:56 +0000 (0:00:02.847) 0:02:14.831 ********* 2026-03-17 00:58:58.425092 | orchestrator | =============================================================================== 
2026-03-17 00:58:58.425095 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 56.13s 2026-03-17 00:58:58.425099 | orchestrator | opensearch : Restart opensearch container ------------------------------ 48.98s 2026-03-17 00:58:58.425102 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.85s 2026-03-17 00:58:58.425105 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.77s 2026-03-17 00:58:58.425108 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.75s 2026-03-17 00:58:58.425111 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.73s 2026-03-17 00:58:58.425115 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.48s 2026-03-17 00:58:58.425118 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.30s 2026-03-17 00:58:58.425121 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.28s 2026-03-17 00:58:58.425125 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.10s 2026-03-17 00:58:58.425130 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.79s 2026-03-17 00:58:58.425135 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.60s 2026-03-17 00:58:58.425140 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.08s 2026-03-17 00:58:58.425145 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.91s 2026-03-17 00:58:58.425149 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.65s 2026-03-17 00:58:58.425155 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.64s 
2026-03-17 00:58:58.425158 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.45s 2026-03-17 00:58:58.425161 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.45s 2026-03-17 00:58:58.425164 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.42s 2026-03-17 00:58:58.425168 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s 2026-03-17 00:59:01.462822 | orchestrator | 2026-03-17 00:59:01 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 00:59:01.463679 | orchestrator | 2026-03-17 00:59:01 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:59:01.463942 | orchestrator | 2026-03-17 00:59:01 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:04.501786 | orchestrator | 2026-03-17 00:59:04 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 00:59:04.502106 | orchestrator | 2026-03-17 00:59:04 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:59:04.502128 | orchestrator | 2026-03-17 00:59:04 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:07.538145 | orchestrator | 2026-03-17 00:59:07 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 00:59:07.540600 | orchestrator | 2026-03-17 00:59:07 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:59:07.540955 | orchestrator | 2026-03-17 00:59:07 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:10.584084 | orchestrator | 2026-03-17 00:59:10 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 00:59:10.586184 | orchestrator | 2026-03-17 00:59:10 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state STARTED 2026-03-17 00:59:10.586506 | orchestrator | 2026-03-17 00:59:10 | INFO  | Wait 
1 second(s) until the next check 2026-03-17 00:59:28.855298 | orchestrator | 
2026-03-17 00:59:28 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 00:59:28.858864 | orchestrator | 2026-03-17 00:59:28.858968 | orchestrator | 2026-03-17 00:59:28 | INFO  | Task b6f7476f-5b48-451d-b30b-bd8e8057ba4e is in state SUCCESS 2026-03-17 00:59:28.860965 | orchestrator | 2026-03-17 00:59:28.861034 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-17 00:59:28.861047 | orchestrator | 2026-03-17 00:59:28.861056 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-17 00:59:28.861065 | orchestrator | Tuesday 17 March 2026 00:56:41 +0000 (0:00:00.074) 0:00:00.074 ********* 2026-03-17 00:59:28.861073 | orchestrator | ok: [localhost] => { 2026-03-17 00:59:28.861086 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-17 00:59:28.861101 | orchestrator | } 2026-03-17 00:59:28.861120 | orchestrator | 2026-03-17 00:59:28.861446 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-17 00:59:28.861464 | orchestrator | Tuesday 17 March 2026 00:56:41 +0000 (0:00:00.040) 0:00:00.114 ********* 2026-03-17 00:59:28.861492 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-17 00:59:28.861502 | orchestrator | ...ignoring 2026-03-17 00:59:28.861510 | orchestrator | 2026-03-17 00:59:28.861518 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-17 00:59:28.861526 | orchestrator | Tuesday 17 March 2026 00:56:44 +0000 (0:00:02.732) 0:00:02.847 ********* 2026-03-17 00:59:28.861534 | orchestrator | skipping: [localhost] 2026-03-17 00:59:28.861542 | orchestrator | 2026-03-17 00:59:28.861550 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-17 00:59:28.861558 | orchestrator | Tuesday 17 March 2026 00:56:44 +0000 (0:00:00.045) 0:00:02.893 ********* 2026-03-17 00:59:28.861566 | orchestrator | ok: [localhost] 2026-03-17 00:59:28.861574 | orchestrator | 2026-03-17 00:59:28.861583 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:59:28.861591 | orchestrator | 2026-03-17 00:59:28.861598 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:59:28.861606 | orchestrator | Tuesday 17 March 2026 00:56:44 +0000 (0:00:00.133) 0:00:03.027 ********* 2026-03-17 00:59:28.861614 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:28.861622 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:28.861630 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:28.861637 | orchestrator | 2026-03-17 00:59:28.861645 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:59:28.861653 | orchestrator | Tuesday 17 March 2026 00:56:45 +0000 (0:00:00.249) 0:00:03.276 ********* 2026-03-17 00:59:28.861661 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-17 00:59:28.861669 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-03-17 00:59:28.861677 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-17 00:59:28.861685 | orchestrator | 2026-03-17 00:59:28.861692 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-17 00:59:28.861700 | orchestrator | 2026-03-17 00:59:28.861708 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-17 00:59:28.861716 | orchestrator | Tuesday 17 March 2026 00:56:45 +0000 (0:00:00.459) 0:00:03.736 ********* 2026-03-17 00:59:28.861724 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-17 00:59:28.861732 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-17 00:59:28.861740 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-17 00:59:28.861747 | orchestrator | 2026-03-17 00:59:28.861755 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-17 00:59:28.861763 | orchestrator | Tuesday 17 March 2026 00:56:45 +0000 (0:00:00.350) 0:00:04.086 ********* 2026-03-17 00:59:28.861771 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:28.861780 | orchestrator | 2026-03-17 00:59:28.861788 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-17 00:59:28.861795 | orchestrator | Tuesday 17 March 2026 00:56:46 +0000 (0:00:00.500) 0:00:04.586 ********* 2026-03-17 00:59:28.861824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:28.861850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:28.861860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:28.861874 | orchestrator | 2026-03-17 00:59:28.861989 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-17 00:59:28.862010 | orchestrator | Tuesday 17 March 2026 00:56:49 +0000 (0:00:03.013) 0:00:07.599 ********* 2026-03-17 00:59:28.862074 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.862089 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.862102 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:28.862115 | orchestrator | 2026-03-17 00:59:28.862130 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-17 00:59:28.862143 | orchestrator | Tuesday 17 March 2026 00:56:50 +0000 (0:00:00.671) 0:00:08.270 ********* 2026-03-17 00:59:28.862163 | orchestrator | 
skipping: [testbed-node-1] 2026-03-17 00:59:28.862177 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.862190 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:28.862203 | orchestrator | 2026-03-17 00:59:28.862216 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-17 00:59:28.862224 | orchestrator | Tuesday 17 March 2026 00:56:51 +0000 (0:00:01.374) 0:00:09.644 ********* 2026-03-17 00:59:28.862233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:28.862254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:28.862276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:28.862285 | orchestrator | 2026-03-17 00:59:28.862293 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-17 00:59:28.862301 | orchestrator | Tuesday 17 March 2026 00:56:54 +0000 (0:00:03.245) 0:00:12.890 ********* 2026-03-17 00:59:28.862309 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.862317 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.862325 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:28.862333 | orchestrator | 2026-03-17 00:59:28.862341 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-17 00:59:28.862349 | orchestrator | Tuesday 17 March 2026 00:56:55 +0000 (0:00:01.072) 0:00:13.962 ********* 2026-03-17 00:59:28.862356 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:28.862364 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:28.862377 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:28.862385 | orchestrator | 2026-03-17 00:59:28.862393 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-17 00:59:28.862401 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:03.821) 0:00:17.784 ********* 2026-03-17 00:59:28.862409 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:28.862417 | orchestrator | 2026-03-17 00:59:28.862425 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-17 00:59:28.862437 | orchestrator | Tuesday 17 March 2026 00:57:00 +0000 (0:00:00.445) 0:00:18.230 ********* 2026-03-17 00:59:28.862466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:28.862481 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.862495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:28.862514 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:28.862532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:28.862542 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.862550 | orchestrator | 2026-03-17 00:59:28.862558 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-17 00:59:28.862566 | orchestrator | Tuesday 17 March 2026 00:57:02 
+0000 (0:00:02.917) 0:00:21.148 ********* 2026-03-17 00:59:28.862574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:28.862596 | orchestrator | skipping: [testbed-node-0] 2026-03-17 
00:59:28.862610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:28.862619 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.862632 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:28.862645 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.862653 | orchestrator | 2026-03-17 00:59:28.862661 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2026-03-17 00:59:28.862669 | orchestrator | Tuesday 17 March 2026 00:57:05 +0000 (0:00:02.459) 0:00:23.607 ********* 2026-03-17 00:59:28.862683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:28.862692 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.862704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:28.862719 
| orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.862728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 00:59:28.862736 | orchestrator | skipping: [testbed-node-0] 2026-03-17 
00:59:28.862744 | orchestrator | 2026-03-17 00:59:28.862752 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-17 00:59:28.862760 | orchestrator | Tuesday 17 March 2026 00:57:07 +0000 (0:00:02.411) 0:00:26.019 ********* 2026-03-17 00:59:28.862778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:28.862795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-03-17 00:59:28.862814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 00:59:28.862824 | orchestrator | 2026-03-17 00:59:28.862832 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2026-03-17 00:59:28.862845 | orchestrator | Tuesday 17 March 2026 00:57:10 +0000 (0:00:03.042) 0:00:29.061 ********* 2026-03-17 00:59:28.862853 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:28.862861 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:28.862869 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:28.862876 | orchestrator | 2026-03-17 00:59:28.862884 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-17 00:59:28.862920 | orchestrator | Tuesday 17 March 2026 00:57:11 +0000 (0:00:01.010) 0:00:30.072 ********* 2026-03-17 00:59:28.862929 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:28.862937 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:28.862945 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:28.862953 | orchestrator | 2026-03-17 00:59:28.862961 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-17 00:59:28.862969 | orchestrator | Tuesday 17 March 2026 00:57:12 +0000 (0:00:00.304) 0:00:30.376 ********* 2026-03-17 00:59:28.862977 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:28.862985 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:28.862993 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:28.863001 | orchestrator | 2026-03-17 00:59:28.863009 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-17 00:59:28.863017 | orchestrator | Tuesday 17 March 2026 00:57:12 +0000 (0:00:00.298) 0:00:30.675 ********* 2026-03-17 00:59:28.863027 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-17 00:59:28.863036 | orchestrator | ...ignoring 2026-03-17 00:59:28.863044 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-17 00:59:28.863052 | orchestrator | ...ignoring 2026-03-17 00:59:28.863060 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-17 00:59:28.863068 | orchestrator | ...ignoring 2026-03-17 00:59:28.863075 | orchestrator | 2026-03-17 00:59:28.863083 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-17 00:59:28.863091 | orchestrator | Tuesday 17 March 2026 00:57:23 +0000 (0:00:10.862) 0:00:41.537 ********* 2026-03-17 00:59:28.863099 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:28.863107 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:28.863115 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:28.863123 | orchestrator | 2026-03-17 00:59:28.863130 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-17 00:59:28.863138 | orchestrator | Tuesday 17 March 2026 00:57:23 +0000 (0:00:00.432) 0:00:41.970 ********* 2026-03-17 00:59:28.863146 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:28.863154 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.863162 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.863169 | orchestrator | 2026-03-17 00:59:28.863177 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-17 00:59:28.863185 | orchestrator | Tuesday 17 March 2026 00:57:24 +0000 (0:00:00.599) 0:00:42.570 ********* 2026-03-17 00:59:28.863193 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:28.863201 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.863209 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.863217 | orchestrator | 2026-03-17 00:59:28.863225 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-17 00:59:28.863233 | orchestrator | Tuesday 17 March 2026 00:57:24 +0000 (0:00:00.396) 0:00:42.967 ********* 2026-03-17 00:59:28.863241 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:28.863249 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.863257 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.863264 | orchestrator | 2026-03-17 00:59:28.863272 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-17 00:59:28.863291 | orchestrator | Tuesday 17 March 2026 00:57:25 +0000 (0:00:00.426) 0:00:43.394 ********* 2026-03-17 00:59:28.863300 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:28.863307 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:28.863315 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:28.863323 | orchestrator | 2026-03-17 00:59:28.863331 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-17 00:59:28.863342 | orchestrator | Tuesday 17 March 2026 00:57:25 +0000 (0:00:00.351) 0:00:43.746 ********* 2026-03-17 00:59:28.863354 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:28.863367 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.863381 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.863390 | orchestrator | 2026-03-17 00:59:28.863402 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-17 00:59:28.863410 | orchestrator | Tuesday 17 March 2026 00:57:26 +0000 (0:00:00.523) 0:00:44.269 ********* 2026-03-17 00:59:28.863418 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.863426 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.863433 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-17 00:59:28.863441 | orchestrator | 2026-03-17 
00:59:28.863449 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-17 00:59:28.863457 | orchestrator | Tuesday 17 March 2026 00:57:26 +0000 (0:00:00.342) 0:00:44.612 ********* 2026-03-17 00:59:28.863465 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:28.863473 | orchestrator | 2026-03-17 00:59:28.863481 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-17 00:59:28.863489 | orchestrator | Tuesday 17 March 2026 00:57:35 +0000 (0:00:08.948) 0:00:53.560 ********* 2026-03-17 00:59:28.863497 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:28.863504 | orchestrator | 2026-03-17 00:59:28.863512 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-17 00:59:28.863520 | orchestrator | Tuesday 17 March 2026 00:57:35 +0000 (0:00:00.098) 0:00:53.659 ********* 2026-03-17 00:59:28.863528 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:28.863536 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.863544 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.863552 | orchestrator | 2026-03-17 00:59:28.863560 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-17 00:59:28.863568 | orchestrator | Tuesday 17 March 2026 00:57:36 +0000 (0:00:00.829) 0:00:54.488 ********* 2026-03-17 00:59:28.863575 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:28.863583 | orchestrator | 2026-03-17 00:59:28.863591 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-17 00:59:28.863599 | orchestrator | Tuesday 17 March 2026 00:57:43 +0000 (0:00:07.505) 0:01:01.993 ********* 2026-03-17 00:59:28.863607 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:28.863615 | orchestrator | 2026-03-17 00:59:28.863623 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-03-17 00:59:28.863631 | orchestrator | Tuesday 17 March 2026 00:57:46 +0000 (0:00:02.618) 0:01:04.611 ********* 2026-03-17 00:59:28.863639 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:28.863647 | orchestrator | 2026-03-17 00:59:28.863655 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-17 00:59:28.863662 | orchestrator | Tuesday 17 March 2026 00:57:48 +0000 (0:00:02.178) 0:01:06.790 ********* 2026-03-17 00:59:28.863670 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:28.863678 | orchestrator | 2026-03-17 00:59:28.863686 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-17 00:59:28.863695 | orchestrator | Tuesday 17 March 2026 00:57:48 +0000 (0:00:00.105) 0:01:06.895 ********* 2026-03-17 00:59:28.863709 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:28.863722 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.863735 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.863748 | orchestrator | 2026-03-17 00:59:28.863772 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-17 00:59:28.863785 | orchestrator | Tuesday 17 March 2026 00:57:49 +0000 (0:00:00.339) 0:01:07.234 ********* 2026-03-17 00:59:28.863798 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:28.863810 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-17 00:59:28.863822 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:28.863835 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:28.863847 | orchestrator | 2026-03-17 00:59:28.863860 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-17 00:59:28.863873 | orchestrator | skipping: no hosts matched 2026-03-17 00:59:28.863886 | orchestrator | 2026-03-17 00:59:28.863960 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-17 00:59:28.863975 | orchestrator | 2026-03-17 00:59:28.863988 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-17 00:59:28.863997 | orchestrator | Tuesday 17 March 2026 00:57:50 +0000 (0:00:01.331) 0:01:08.566 ********* 2026-03-17 00:59:28.864005 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:59:28.864013 | orchestrator | 2026-03-17 00:59:28.864021 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-17 00:59:28.864029 | orchestrator | Tuesday 17 March 2026 00:58:10 +0000 (0:00:20.571) 0:01:29.137 ********* 2026-03-17 00:59:28.864037 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:28.864044 | orchestrator | 2026-03-17 00:59:28.864052 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-17 00:59:28.864061 | orchestrator | Tuesday 17 March 2026 00:58:21 +0000 (0:00:10.640) 0:01:39.778 ********* 2026-03-17 00:59:28.864069 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:28.864076 | orchestrator | 2026-03-17 00:59:28.864084 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-17 00:59:28.864092 | orchestrator | 2026-03-17 00:59:28.864100 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-17 00:59:28.864108 | orchestrator | Tuesday 17 March 2026 00:58:23 +0000 (0:00:02.129) 0:01:41.908 ********* 2026-03-17 00:59:28.864121 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:59:28.864140 | orchestrator | 2026-03-17 00:59:28.864153 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-17 00:59:28.864177 | orchestrator | Tuesday 17 March 2026 00:58:39 +0000 (0:00:15.693) 0:01:57.602 ********* 2026-03-17 00:59:28.864189 | 
orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:28.864202 | orchestrator | 2026-03-17 00:59:28.864213 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-17 00:59:28.864226 | orchestrator | Tuesday 17 March 2026 00:58:53 +0000 (0:00:14.507) 0:02:12.109 ********* 2026-03-17 00:59:28.864239 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:28.864252 | orchestrator | 2026-03-17 00:59:28.864265 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-17 00:59:28.864277 | orchestrator | 2026-03-17 00:59:28.864291 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-17 00:59:28.864313 | orchestrator | Tuesday 17 March 2026 00:58:56 +0000 (0:00:02.415) 0:02:14.524 ********* 2026-03-17 00:59:28.864322 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:28.864330 | orchestrator | 2026-03-17 00:59:28.864338 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-17 00:59:28.864346 | orchestrator | Tuesday 17 March 2026 00:59:11 +0000 (0:00:15.381) 0:02:29.906 ********* 2026-03-17 00:59:28.864353 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:28.864361 | orchestrator | 2026-03-17 00:59:28.864369 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-17 00:59:28.864377 | orchestrator | Tuesday 17 March 2026 00:59:12 +0000 (0:00:00.608) 0:02:30.515 ********* 2026-03-17 00:59:28.864385 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:28.864393 | orchestrator | 2026-03-17 00:59:28.864400 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-17 00:59:28.864417 | orchestrator | 2026-03-17 00:59:28.864440 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-17 00:59:28.864457 | orchestrator | 
Tuesday 17 March 2026 00:59:14 +0000 (0:00:02.466) 0:02:32.981 ********* 2026-03-17 00:59:28.864465 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:59:28.864473 | orchestrator | 2026-03-17 00:59:28.864480 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-17 00:59:28.864488 | orchestrator | Tuesday 17 March 2026 00:59:15 +0000 (0:00:00.522) 0:02:33.504 ********* 2026-03-17 00:59:28.864496 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.864504 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.864512 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:28.864520 | orchestrator | 2026-03-17 00:59:28.864527 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-17 00:59:28.864535 | orchestrator | Tuesday 17 March 2026 00:59:17 +0000 (0:00:01.999) 0:02:35.503 ********* 2026-03-17 00:59:28.864543 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.864551 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.864559 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:28.864566 | orchestrator | 2026-03-17 00:59:28.864574 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-17 00:59:28.864582 | orchestrator | Tuesday 17 March 2026 00:59:19 +0000 (0:00:01.956) 0:02:37.460 ********* 2026-03-17 00:59:28.864590 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.864597 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.864605 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:28.864613 | orchestrator | 2026-03-17 00:59:28.864621 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-17 00:59:28.864629 | orchestrator | Tuesday 17 March 2026 00:59:21 +0000 (0:00:02.091) 0:02:39.552 ********* 2026-03-17 00:59:28.864640 | 
orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.864653 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.864666 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:59:28.864679 | orchestrator | 2026-03-17 00:59:28.864692 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-17 00:59:28.864701 | orchestrator | Tuesday 17 March 2026 00:59:23 +0000 (0:00:02.372) 0:02:41.924 ********* 2026-03-17 00:59:28.864709 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:59:28.864717 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:59:28.864724 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:59:28.864754 | orchestrator | 2026-03-17 00:59:28.864763 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-17 00:59:28.864771 | orchestrator | Tuesday 17 March 2026 00:59:26 +0000 (0:00:02.900) 0:02:44.825 ********* 2026-03-17 00:59:28.864778 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:59:28.864786 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:59:28.864794 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:59:28.864983 | orchestrator | 2026-03-17 00:59:28.865006 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:59:28.865021 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-17 00:59:28.865033 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-17 00:59:28.865057 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-17 00:59:28.865070 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-17 00:59:28.865084 | orchestrator | 2026-03-17 00:59:28.865096 | orchestrator | 2026-03-17 00:59:28.865109 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-17 00:59:28.865134 | orchestrator | Tuesday 17 March 2026 00:59:26 +0000 (0:00:00.187) 0:02:45.013 ********* 2026-03-17 00:59:28.865147 | orchestrator | =============================================================================== 2026-03-17 00:59:28.865162 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 36.27s 2026-03-17 00:59:28.865174 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 25.15s 2026-03-17 00:59:28.865198 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 15.38s 2026-03-17 00:59:28.865211 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.86s 2026-03-17 00:59:28.865222 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 8.95s 2026-03-17 00:59:28.865235 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.51s 2026-03-17 00:59:28.865248 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.55s 2026-03-17 00:59:28.865261 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.82s 2026-03-17 00:59:28.865288 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.25s 2026-03-17 00:59:28.865302 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.04s 2026-03-17 00:59:28.865314 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.01s 2026-03-17 00:59:28.865328 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.92s 2026-03-17 00:59:28.865343 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.90s 2026-03-17 00:59:28.865356 | orchestrator | Check MariaDB service 
--------------------------------------------------- 2.73s 2026-03-17 00:59:28.865370 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.62s 2026-03-17 00:59:28.865383 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.47s 2026-03-17 00:59:28.865396 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.46s 2026-03-17 00:59:28.865409 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.41s 2026-03-17 00:59:28.865424 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.37s 2026-03-17 00:59:28.865434 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.18s 2026-03-17 00:59:28.865443 | orchestrator | 2026-03-17 00:59:28 | INFO  | Task 68b73dd8-434c-462d-80d2-5e611fa6789c is in state STARTED 2026-03-17 00:59:28.865452 | orchestrator | 2026-03-17 00:59:28 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 00:59:28.865460 | orchestrator | 2026-03-17 00:59:28 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:31.908201 | orchestrator | 2026-03-17 00:59:31 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 00:59:31.908450 | orchestrator | 2026-03-17 00:59:31 | INFO  | Task 68b73dd8-434c-462d-80d2-5e611fa6789c is in state STARTED 2026-03-17 00:59:31.909385 | orchestrator | 2026-03-17 00:59:31 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 00:59:31.909416 | orchestrator | 2026-03-17 00:59:31 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:34.952347 | orchestrator | 2026-03-17 00:59:34 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 00:59:34.953263 | orchestrator | 2026-03-17 00:59:34 | INFO  | Task 68b73dd8-434c-462d-80d2-5e611fa6789c is in state STARTED 2026-03-17 
INFO  | Task 68b73dd8-434c-462d-80d2-5e611fa6789c is in state STARTED 2026-03-17 00:59:47.116129 | orchestrator | 2026-03-17 00:59:47 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 00:59:47.116168 | orchestrator | 2026-03-17 00:59:47 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:44.978043 | orchestrator | 2026-03-17 01:00:44 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 01:00:44.979602 | orchestrator | 2026-03-17 01:00:44 | INFO  | Task 68b73dd8-434c-462d-80d2-5e611fa6789c is in state STARTED 2026-03-17 01:00:44.981133 | orchestrator | 
2026-03-17 01:00:44 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:00:44.981161 | orchestrator | 2026-03-17 01:00:44 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:48.015952 | orchestrator | 2026-03-17 01:00:48 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state STARTED 2026-03-17 01:00:48.017602 | orchestrator | 2026-03-17 01:00:48 | INFO  | Task 68b73dd8-434c-462d-80d2-5e611fa6789c is in state STARTED 2026-03-17 01:00:48.018887 | orchestrator | 2026-03-17 01:00:48 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:00:48.019106 | orchestrator | 2026-03-17 01:00:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:51.065446 | orchestrator | 2026-03-17 01:00:51 | INFO  | Task e28bc890-1972-42fd-81c1-0f5ac48822d5 is in state SUCCESS 2026-03-17 01:00:51.066893 | orchestrator | 2026-03-17 01:00:51.066964 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-17 01:00:51.066970 | orchestrator | 2.16.14 2026-03-17 01:00:51.066974 | orchestrator | 2026-03-17 01:00:51.066978 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-17 01:00:51.066982 | orchestrator | 2026-03-17 01:00:51.066986 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-17 01:00:51.066990 | orchestrator | Tuesday 17 March 2026 00:58:43 +0000 (0:00:00.544) 0:00:00.544 ********* 2026-03-17 01:00:51.066994 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:00:51.066999 | orchestrator | 2026-03-17 01:00:51.067003 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-17 01:00:51.067006 | orchestrator | Tuesday 17 March 2026 00:58:44 +0000 (0:00:00.613) 0:00:01.158 ********* 2026-03-17 01:00:51.067010 | 
orchestrator | ok: [testbed-node-3] 2026-03-17 01:00:51.067016 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:00:51.067022 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:00:51.067028 | orchestrator | 2026-03-17 01:00:51.067035 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-17 01:00:51.067041 | orchestrator | Tuesday 17 March 2026 00:58:44 +0000 (0:00:00.575) 0:00:01.733 ********* 2026-03-17 01:00:51.067047 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:00:51.067053 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:00:51.067058 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:00:51.067064 | orchestrator | 2026-03-17 01:00:51.067070 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-17 01:00:51.067076 | orchestrator | Tuesday 17 March 2026 00:58:44 +0000 (0:00:00.278) 0:00:02.011 ********* 2026-03-17 01:00:51.067082 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:00:51.067088 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:00:51.067095 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:00:51.067101 | orchestrator | 2026-03-17 01:00:51.067108 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-17 01:00:51.067114 | orchestrator | Tuesday 17 March 2026 00:58:45 +0000 (0:00:00.808) 0:00:02.819 ********* 2026-03-17 01:00:51.067118 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:00:51.067122 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:00:51.067125 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:00:51.067129 | orchestrator | 2026-03-17 01:00:51.067133 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-17 01:00:51.067150 | orchestrator | Tuesday 17 March 2026 00:58:46 +0000 (0:00:00.291) 0:00:03.111 ********* 2026-03-17 01:00:51.067155 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:00:51.067158 | 
orchestrator | ok: [testbed-node-4]
2026-03-17 01:00:51.067162 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:00:51.067166 | orchestrator |
2026-03-17 01:00:51.067170 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-17 01:00:51.067174 | orchestrator | Tuesday 17 March 2026 00:58:46 +0000 (0:00:00.277) 0:00:03.389 *********
2026-03-17 01:00:51.067178 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:00:51.067181 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:00:51.067185 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:00:51.067189 | orchestrator |
2026-03-17 01:00:51.067193 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-17 01:00:51.067196 | orchestrator | Tuesday 17 March 2026 00:58:46 +0000 (0:00:00.293) 0:00:03.682 *********
2026-03-17 01:00:51.067200 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.067236 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:00:51.067241 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:00:51.067244 | orchestrator |
2026-03-17 01:00:51.067248 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-17 01:00:51.067252 | orchestrator | Tuesday 17 March 2026 00:58:47 +0000 (0:00:00.464) 0:00:04.146 *********
2026-03-17 01:00:51.067256 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:00:51.067260 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:00:51.067263 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:00:51.067267 | orchestrator |
2026-03-17 01:00:51.067271 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-17 01:00:51.067275 | orchestrator | Tuesday 17 March 2026 00:58:47 +0000 (0:00:00.269) 0:00:04.415 *********
2026-03-17 01:00:51.067279 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-17 01:00:51.067282 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-17 01:00:51.067286 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-17 01:00:51.067290 | orchestrator |
2026-03-17 01:00:51.067294 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-17 01:00:51.067305 | orchestrator | Tuesday 17 March 2026 00:58:47 +0000 (0:00:00.612) 0:00:05.028 *********
2026-03-17 01:00:51.067312 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:00:51.067318 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:00:51.067324 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:00:51.067329 | orchestrator |
2026-03-17 01:00:51.067335 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-17 01:00:51.067341 | orchestrator | Tuesday 17 March 2026 00:58:48 +0000 (0:00:00.418) 0:00:05.446 *********
2026-03-17 01:00:51.067345 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-17 01:00:51.067349 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-17 01:00:51.067353 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-17 01:00:51.067357 | orchestrator |
2026-03-17 01:00:51.067360 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-17 01:00:51.067364 | orchestrator | Tuesday 17 March 2026 00:58:50 +0000 (0:00:02.312) 0:00:07.759 *********
2026-03-17 01:00:51.067368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-17 01:00:51.067372 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-17 01:00:51.067375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-17 01:00:51.067379 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.067383 | orchestrator |
2026-03-17 01:00:51.067516 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-17 01:00:51.067526 | orchestrator | Tuesday 17 March 2026 00:58:51 +0000 (0:00:00.620) 0:00:08.379 *********
2026-03-17 01:00:51.067537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.067543 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.067547 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.067551 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.067555 | orchestrator |
2026-03-17 01:00:51.067558 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-17 01:00:51.067562 | orchestrator | Tuesday 17 March 2026 00:58:52 +0000 (0:00:00.804) 0:00:09.184 *********
2026-03-17 01:00:51.067567 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.067573 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.067577 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.067581 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.067585 | orchestrator |
2026-03-17 01:00:51.067589 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-17 01:00:51.067592 | orchestrator | Tuesday 17 March 2026 00:58:52 +0000 (0:00:00.336) 0:00:09.520 *********
2026-03-17 01:00:51.067600 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9e9f78ec1fb9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-17 00:58:49.114448', 'end': '2026-03-17 00:58:49.146492', 'delta': '0:00:00.032044', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9e9f78ec1fb9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.067607 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b6c7adc72088', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-17 00:58:49.838264', 'end': '2026-03-17 00:58:49.875109', 'delta': '0:00:00.036845', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b6c7adc72088'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.067619 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2132e31f4908', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-17 00:58:50.493335', 'end': '2026-03-17 00:58:50.565784', 'delta': '0:00:00.072449', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2132e31f4908'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.067624 | orchestrator |
2026-03-17 01:00:51.067627 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-17 01:00:51.067631 | orchestrator | Tuesday 17 March 2026 00:58:52 +0000 (0:00:00.187) 0:00:09.707 *********
2026-03-17 01:00:51.067635 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:00:51.067639 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:00:51.067643 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:00:51.067646 | orchestrator |
2026-03-17 01:00:51.067691 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-17 01:00:51.067695 | orchestrator | Tuesday 17 March 2026 00:58:53 +0000 (0:00:00.412) 0:00:10.120 *********
2026-03-17 01:00:51.067699 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-17 01:00:51.067703 | orchestrator |
2026-03-17 01:00:51.067707 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-17 01:00:51.067897 | orchestrator | Tuesday 17 March 2026 00:58:54 +0000 (0:00:01.856) 0:00:11.976 *********
2026-03-17 01:00:51.067907 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.067911 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:00:51.067915 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:00:51.067919 | orchestrator |
2026-03-17 01:00:51.067922 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-17 01:00:51.067926 | orchestrator | Tuesday 17 March 2026 00:58:55 +0000 (0:00:00.302) 0:00:12.279 *********
2026-03-17 01:00:51.067930 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.067934 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:00:51.067938 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:00:51.067941 | orchestrator |
2026-03-17 01:00:51.067945 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-17 01:00:51.067949 | orchestrator | Tuesday 17 March 2026 00:58:55 +0000 (0:00:00.407) 0:00:12.686 *********
2026-03-17 01:00:51.067953 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.067956 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:00:51.067960 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:00:51.067964 | orchestrator |
2026-03-17 01:00:51.067968 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-17 01:00:51.067971 | orchestrator | Tuesday 17 March 2026 00:58:56 +0000 (0:00:00.446) 0:00:13.132 *********
2026-03-17 01:00:51.067975 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:00:51.067979 | orchestrator |
2026-03-17 01:00:51.067982 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-17 01:00:51.067986 | orchestrator | Tuesday 17 March 2026 00:58:56 +0000 (0:00:00.113) 0:00:13.245 *********
2026-03-17 01:00:51.067990 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.067994 | orchestrator |
2026-03-17 01:00:51.067998 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-17 01:00:51.068001 | orchestrator | Tuesday 17 March 2026 00:58:56 +0000 (0:00:00.220) 0:00:13.466 *********
2026-03-17 01:00:51.068010 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.068013 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:00:51.068017 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:00:51.068021 | orchestrator |
2026-03-17 01:00:51.068025 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-17 01:00:51.068028 | orchestrator | Tuesday 17 March 2026 00:58:56 +0000 (0:00:00.277) 0:00:13.743 *********
2026-03-17 01:00:51.068032 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.068036 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:00:51.068040 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:00:51.068043 | orchestrator |
2026-03-17 01:00:51.068047 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-17 01:00:51.068054 | orchestrator | Tuesday 17 March 2026 00:58:56 +0000 (0:00:00.262) 0:00:14.005 *********
2026-03-17 01:00:51.068058 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.068062 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:00:51.068066 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:00:51.068070 | orchestrator |
2026-03-17 01:00:51.068073 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-17 01:00:51.068077 | orchestrator | Tuesday 17 March 2026 00:58:57 +0000 (0:00:00.421) 0:00:14.427 *********
2026-03-17 01:00:51.068081 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.068085 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:00:51.068088 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:00:51.068093 | orchestrator |
2026-03-17 01:00:51.068099 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-17 01:00:51.068106 | orchestrator | Tuesday 17 March 2026 00:58:57 +0000 (0:00:00.315) 0:00:14.742 *********
2026-03-17 01:00:51.068111 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.068117 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:00:51.068124 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:00:51.068130 | orchestrator |
2026-03-17 01:00:51.068136 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-17 01:00:51.068142 | orchestrator | Tuesday 17 March 2026 00:58:57 +0000 (0:00:00.301) 0:00:15.043 *********
2026-03-17 01:00:51.068149 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.068155 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:00:51.068162 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:00:51.068201 | orchestrator |
2026-03-17 01:00:51.068209 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-17 01:00:51.068216 | orchestrator | Tuesday 17 March 2026 00:58:58 +0000 (0:00:00.290) 0:00:15.333 *********
2026-03-17 01:00:51.068222 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.068229 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:00:51.068236 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:00:51.068242 | orchestrator |
2026-03-17 01:00:51.068249 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-17 01:00:51.068255 | orchestrator | Tuesday 17 March 2026 00:58:58 +0000 (0:00:00.380) 0:00:15.713 *********
2026-03-17 01:00:51.068263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b48309d9--c226--530e--bc23--6e205cf9651b-osd--block--b48309d9--c226--530e--bc23--6e205cf9651b', 'dm-uuid-LVM-JRKlP6LIzKroJwI7cwJekUmidQP1dkkc10P6t7SNbt0Fuu0dM1f0yCQj7KuABZzu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f-osd--block--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f', 'dm-uuid-LVM-FTXPw6vvhD2ctiRDXpkTucTstUSMnhZjMX8frOXeKo9sMioVcXsDXqozTvTId0Xd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part1', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part14', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part15', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part16', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-17 01:00:51.068372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--13f697f5--12ba--5526--98d1--b1a9c265f800-osd--block--13f697f5--12ba--5526--98d1--b1a9c265f800', 'dm-uuid-LVM-ydCXoqPtK5pYOVor0N8MzRweku90f1HZVD2GP5etIYpm9MAS1EJkDslBAem20cjJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b48309d9--c226--530e--bc23--6e205cf9651b-osd--block--b48309d9--c226--530e--bc23--6e205cf9651b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DUgk5R-vUG2-TrLu-eqkb-PG88-nP5c-anwxd8', 'scsi-0QEMU_QEMU_HARDDISK_e46b8678-1baa-4ba8-a612-904460f97320', 'scsi-SQEMU_QEMU_HARDDISK_e46b8678-1baa-4ba8-a612-904460f97320'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-17 01:00:51.068393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0cc3c10--edeb--5a7b--849a--4273befffbf6-osd--block--a0cc3c10--edeb--5a7b--849a--4273befffbf6', 'dm-uuid-LVM-9qSBwfie3LEVyt9oLHcz7QNTZZPm9GLrQmSddtKIdhKAciSgHjqYZqMg3K9caQlF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f-osd--block--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JPxT8G-FQnz-R6eK-ccbB-f3TT-SWfh-BaDf8g', 'scsi-0QEMU_QEMU_HARDDISK_f95d5766-a3db-4d15-9977-785c02a190f5', 'scsi-SQEMU_QEMU_HARDDISK_f95d5766-a3db-4d15-9977-785c02a190f5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-17 01:00:51.068404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2854fd14-3e82-4dcb-865e-ef6e028a2c86', 'scsi-SQEMU_QEMU_HARDDISK_2854fd14-3e82-4dcb-865e-ef6e028a2c86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-17 01:00:51.068408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-17 01:00:51.068423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068446 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:00:51.068450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part1', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part14', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part15', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part16', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-17 01:00:51.068480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--13f697f5--12ba--5526--98d1--b1a9c265f800-osd--block--13f697f5--12ba--5526--98d1--b1a9c265f800'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QLf3du-gcpq-ZiGI-Yp2L-1BCI-i7t9-Fa9c2U', 'scsi-0QEMU_QEMU_HARDDISK_9ec754d5-296d-4a8a-b6d8-e4830272a171', 'scsi-SQEMU_QEMU_HARDDISK_9ec754d5-296d-4a8a-b6d8-e4830272a171'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-17 01:00:51.068492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a0cc3c10--edeb--5a7b--849a--4273befffbf6-osd--block--a0cc3c10--edeb--5a7b--849a--4273befffbf6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZNW1i7-xCmL-GJs5-RydD-2txE-hRH3-ixXHNA', 'scsi-0QEMU_QEMU_HARDDISK_d8ebe49d-b73b-4490-897b-f13bdc67f86d', 'scsi-SQEMU_QEMU_HARDDISK_d8ebe49d-b73b-4490-897b-f13bdc67f86d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-17 01:00:51.068499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0-osd--block--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0', 'dm-uuid-LVM-zrdpKXOcNezBtRtPQoFzCeCrhDD0O4ZsOCdIwGhFUEHdJo0GU6yDutRDUzO0a7XH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91ef76e-9f0f-49ef-bc09-7b70daad6579', 'scsi-SQEMU_QEMU_HARDDISK_f91ef76e-9f0f-49ef-bc09-7b70daad6579'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-17 01:00:51.068515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bc85b6b7--69fe--55db--81a6--3a78775dfc6c-osd--block--bc85b6b7--69fe--55db--81a6--3a78775dfc6c', 'dm-uuid-LVM-ryaTqHhsmATbIQsNQD2CO8W4Nnz0nYQi2hefVaE1oS6srXboYXRExhEIzPlafiha'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-17 01:00:51.068534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068545 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:00:51.068552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-17 01:00:51.068593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:00:51.068601 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0-osd--block--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oHcqGJ-S8Q8-sg2L-oLvt-4xzV-a0Yy-FcYNsg', 'scsi-0QEMU_QEMU_HARDDISK_a7deaf5a-cd70-43cd-92ab-ee3441c5e54f', 'scsi-SQEMU_QEMU_HARDDISK_a7deaf5a-cd70-43cd-92ab-ee3441c5e54f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:00:51.068608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bc85b6b7--69fe--55db--81a6--3a78775dfc6c-osd--block--bc85b6b7--69fe--55db--81a6--3a78775dfc6c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9jeWCi-9DLp-UlhN-eHDh-lDvy-Uc3o-jpevWg', 'scsi-0QEMU_QEMU_HARDDISK_dd7becb9-0584-4efc-8944-d51272ed61fa', 'scsi-SQEMU_QEMU_HARDDISK_dd7becb9-0584-4efc-8944-d51272ed61fa'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:00:51.068613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a90ba68-315a-4ce4-a803-8ffceb4dacc1', 'scsi-SQEMU_QEMU_HARDDISK_0a90ba68-315a-4ce4-a803-8ffceb4dacc1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:00:51.068621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:00:51.068628 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:00:51.068633 | orchestrator | 2026-03-17 01:00:51.068637 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-17 01:00:51.068642 | orchestrator | Tuesday 17 March 2026 00:58:59 +0000 (0:00:00.470) 0:00:16.184 ********* 2026-03-17 01:00:51.068646 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b48309d9--c226--530e--bc23--6e205cf9651b-osd--block--b48309d9--c226--530e--bc23--6e205cf9651b', 'dm-uuid-LVM-JRKlP6LIzKroJwI7cwJekUmidQP1dkkc10P6t7SNbt0Fuu0dM1f0yCQj7KuABZzu'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068651 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f-osd--block--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f', 'dm-uuid-LVM-FTXPw6vvhD2ctiRDXpkTucTstUSMnhZjMX8frOXeKo9sMioVcXsDXqozTvTId0Xd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068656 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068663 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068668 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068678 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068683 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--13f697f5--12ba--5526--98d1--b1a9c265f800-osd--block--13f697f5--12ba--5526--98d1--b1a9c265f800', 'dm-uuid-LVM-ydCXoqPtK5pYOVor0N8MzRweku90f1HZVD2GP5etIYpm9MAS1EJkDslBAem20cjJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068692 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068699 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0cc3c10--edeb--5a7b--849a--4273befffbf6-osd--block--a0cc3c10--edeb--5a7b--849a--4273befffbf6', 'dm-uuid-LVM-9qSBwfie3LEVyt9oLHcz7QNTZZPm9GLrQmSddtKIdhKAciSgHjqYZqMg3K9caQlF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068715 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068719 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068724 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068732 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part1', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part14', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part15', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part16', 'scsi-SQEMU_QEMU_HARDDISK_15a4589a-55c0-4383-a3c8-a64ced338069-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-17 01:00:51.068745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b48309d9--c226--530e--bc23--6e205cf9651b-osd--block--b48309d9--c226--530e--bc23--6e205cf9651b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DUgk5R-vUG2-TrLu-eqkb-PG88-nP5c-anwxd8', 'scsi-0QEMU_QEMU_HARDDISK_e46b8678-1baa-4ba8-a612-904460f97320', 'scsi-SQEMU_QEMU_HARDDISK_e46b8678-1baa-4ba8-a612-904460f97320'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068750 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068755 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f-osd--block--6efa8bf7--29bf--52cd--bcf0--0c94ef95f07f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JPxT8G-FQnz-R6eK-ccbB-f3TT-SWfh-BaDf8g', 'scsi-0QEMU_QEMU_HARDDISK_f95d5766-a3db-4d15-9977-785c02a190f5', 'scsi-SQEMU_QEMU_HARDDISK_f95d5766-a3db-4d15-9977-785c02a190f5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068759 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068766 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2854fd14-3e82-4dcb-865e-ef6e028a2c86', 'scsi-SQEMU_QEMU_HARDDISK_2854fd14-3e82-4dcb-865e-ef6e028a2c86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068776 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068785 | orchestrator | skipping: 
[testbed-node-3] 2026-03-17 01:00:51.068790 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068796 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0-osd--block--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0', 'dm-uuid-LVM-zrdpKXOcNezBtRtPQoFzCeCrhDD0O4ZsOCdIwGhFUEHdJo0GU6yDutRDUzO0a7XH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068803 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bc85b6b7--69fe--55db--81a6--3a78775dfc6c-osd--block--bc85b6b7--69fe--55db--81a6--3a78775dfc6c', 'dm-uuid-LVM-ryaTqHhsmATbIQsNQD2CO8W4Nnz0nYQi2hefVaE1oS6srXboYXRExhEIzPlafiha'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068838 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068846 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068854 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068865 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part1', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part14', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part15', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part16', 'scsi-SQEMU_QEMU_HARDDISK_1121225f-1607-435d-bcbb-f933b6d22b35-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068880 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068888 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--13f697f5--12ba--5526--98d1--b1a9c265f800-osd--block--13f697f5--12ba--5526--98d1--b1a9c265f800'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QLf3du-gcpq-ZiGI-Yp2L-1BCI-i7t9-Fa9c2U', 'scsi-0QEMU_QEMU_HARDDISK_9ec754d5-296d-4a8a-b6d8-e4830272a171', 'scsi-SQEMU_QEMU_HARDDISK_9ec754d5-296d-4a8a-b6d8-e4830272a171'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068895 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a0cc3c10--edeb--5a7b--849a--4273befffbf6-osd--block--a0cc3c10--edeb--5a7b--849a--4273befffbf6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZNW1i7-xCmL-GJs5-RydD-2txE-hRH3-ixXHNA', 'scsi-0QEMU_QEMU_HARDDISK_d8ebe49d-b73b-4490-897b-f13bdc67f86d', 'scsi-SQEMU_QEMU_HARDDISK_d8ebe49d-b73b-4490-897b-f13bdc67f86d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068900 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068907 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f91ef76e-9f0f-49ef-bc09-7b70daad6579', 'scsi-SQEMU_QEMU_HARDDISK_f91ef76e-9f0f-49ef-bc09-7b70daad6579'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068922 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068928 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068939 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:00:51.068943 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068951 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b1d77269-ad7c-4f8a-934d-5b47c43e3d9f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068969 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0-osd--block--6d2c3af9--2510--58af--8cf3--0edda6a2b7a0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oHcqGJ-S8Q8-sg2L-oLvt-4xzV-a0Yy-FcYNsg', 'scsi-0QEMU_QEMU_HARDDISK_a7deaf5a-cd70-43cd-92ab-ee3441c5e54f', 'scsi-SQEMU_QEMU_HARDDISK_a7deaf5a-cd70-43cd-92ab-ee3441c5e54f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--bc85b6b7--69fe--55db--81a6--3a78775dfc6c-osd--block--bc85b6b7--69fe--55db--81a6--3a78775dfc6c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9jeWCi-9DLp-UlhN-eHDh-lDvy-Uc3o-jpevWg', 'scsi-0QEMU_QEMU_HARDDISK_dd7becb9-0584-4efc-8944-d51272ed61fa', 'scsi-SQEMU_QEMU_HARDDISK_dd7becb9-0584-4efc-8944-d51272ed61fa'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:00:51.068981 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a90ba68-315a-4ce4-a803-8ffceb4dacc1', 'scsi-SQEMU_QEMU_HARDDISK_0a90ba68-315a-4ce4-a803-8ffceb4dacc1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068988 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:00:51.068992 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:00:51.069001 | orchestrator | 2026-03-17 01:00:51.069005 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-17 01:00:51.069008 | orchestrator | Tuesday 17 March 2026 00:58:59 +0000 (0:00:00.489) 0:00:16.674 ********* 2026-03-17 01:00:51.069012 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:00:51.069016 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:00:51.069020 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:00:51.069024 | orchestrator | 2026-03-17 01:00:51.069028 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-03-17 01:00:51.069031 | orchestrator | Tuesday 17 March 2026 00:59:00 +0000 (0:00:00.689) 0:00:17.363 ********* 2026-03-17 01:00:51.069035 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:00:51.069039 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:00:51.069043 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:00:51.069046 | orchestrator | 2026-03-17 01:00:51.069050 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-17 01:00:51.069054 | orchestrator | Tuesday 17 March 2026 00:59:00 +0000 (0:00:00.401) 0:00:17.765 ********* 2026-03-17 01:00:51.069058 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:00:51.069061 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:00:51.069065 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:00:51.069069 | orchestrator | 2026-03-17 01:00:51.069073 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-17 01:00:51.069077 | orchestrator | Tuesday 17 March 2026 00:59:01 +0000 (0:00:00.552) 0:00:18.317 ********* 2026-03-17 01:00:51.069080 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:00:51.069084 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:00:51.069088 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:00:51.069092 | orchestrator | 2026-03-17 01:00:51.069095 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-17 01:00:51.069099 | orchestrator | Tuesday 17 March 2026 00:59:01 +0000 (0:00:00.252) 0:00:18.570 ********* 2026-03-17 01:00:51.069103 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:00:51.069108 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:00:51.069114 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:00:51.069125 | orchestrator | 2026-03-17 01:00:51.069129 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-03-17 01:00:51.069133 | orchestrator | Tuesday 17 March 2026 00:59:01 +0000 (0:00:00.363) 0:00:18.933 ********* 2026-03-17 01:00:51.069137 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:00:51.069141 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:00:51.069144 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:00:51.069148 | orchestrator | 2026-03-17 01:00:51.069152 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-17 01:00:51.069155 | orchestrator | Tuesday 17 March 2026 00:59:02 +0000 (0:00:00.400) 0:00:19.333 ********* 2026-03-17 01:00:51.069159 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-17 01:00:51.069163 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-17 01:00:51.069168 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-17 01:00:51.069174 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-17 01:00:51.069180 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-17 01:00:51.069186 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-17 01:00:51.069192 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-17 01:00:51.069198 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-17 01:00:51.069204 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-17 01:00:51.069210 | orchestrator | 2026-03-17 01:00:51.069217 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-17 01:00:51.069224 | orchestrator | Tuesday 17 March 2026 00:59:03 +0000 (0:00:00.733) 0:00:20.066 ********* 2026-03-17 01:00:51.069229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-17 01:00:51.069237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-17 01:00:51.069241 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-03-17 01:00:51.069244 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:00:51.069250 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-17 01:00:51.069254 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-17 01:00:51.069258 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-17 01:00:51.069262 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:00:51.069265 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-17 01:00:51.069269 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-17 01:00:51.069273 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-17 01:00:51.069277 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:00:51.069280 | orchestrator | 2026-03-17 01:00:51.069284 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-17 01:00:51.069288 | orchestrator | Tuesday 17 March 2026 00:59:03 +0000 (0:00:00.296) 0:00:20.363 ********* 2026-03-17 01:00:51.069292 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:00:51.069296 | orchestrator | 2026-03-17 01:00:51.069300 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-17 01:00:51.069304 | orchestrator | Tuesday 17 March 2026 00:59:03 +0000 (0:00:00.593) 0:00:20.957 ********* 2026-03-17 01:00:51.069310 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:00:51.069316 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:00:51.069323 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:00:51.069329 | orchestrator | 2026-03-17 01:00:51.069334 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-03-17 01:00:51.069338 | orchestrator | Tuesday 17 March 2026 00:59:04 +0000 (0:00:00.298) 0:00:21.255 ********* 2026-03-17 01:00:51.069342 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:00:51.069346 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:00:51.069354 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:00:51.069358 | orchestrator | 2026-03-17 01:00:51.069362 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-17 01:00:51.069365 | orchestrator | Tuesday 17 March 2026 00:59:04 +0000 (0:00:00.261) 0:00:21.516 ********* 2026-03-17 01:00:51.069369 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:00:51.069373 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:00:51.069377 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:00:51.069380 | orchestrator | 2026-03-17 01:00:51.069384 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-17 01:00:51.069388 | orchestrator | Tuesday 17 March 2026 00:59:04 +0000 (0:00:00.272) 0:00:21.789 ********* 2026-03-17 01:00:51.069391 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:00:51.069395 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:00:51.069399 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:00:51.069403 | orchestrator | 2026-03-17 01:00:51.069406 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-17 01:00:51.069410 | orchestrator | Tuesday 17 March 2026 00:59:05 +0000 (0:00:00.631) 0:00:22.420 ********* 2026-03-17 01:00:51.069414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:00:51.069417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:00:51.069421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:00:51.069425 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:00:51.069429 | 
orchestrator | 2026-03-17 01:00:51.069432 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-17 01:00:51.069436 | orchestrator | Tuesday 17 March 2026 00:59:05 +0000 (0:00:00.336) 0:00:22.757 ********* 2026-03-17 01:00:51.069440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:00:51.069443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:00:51.069447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:00:51.069451 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:00:51.069455 | orchestrator | 2026-03-17 01:00:51.069458 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-17 01:00:51.069462 | orchestrator | Tuesday 17 March 2026 00:59:06 +0000 (0:00:00.324) 0:00:23.081 ********* 2026-03-17 01:00:51.069466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:00:51.069470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:00:51.069473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:00:51.069477 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:00:51.069481 | orchestrator | 2026-03-17 01:00:51.069484 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-17 01:00:51.069488 | orchestrator | Tuesday 17 March 2026 00:59:06 +0000 (0:00:00.319) 0:00:23.401 ********* 2026-03-17 01:00:51.069492 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:00:51.069496 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:00:51.069499 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:00:51.069503 | orchestrator | 2026-03-17 01:00:51.069507 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-17 01:00:51.069510 | orchestrator | Tuesday 17 March 2026 00:59:06 +0000 
(0:00:00.264) 0:00:23.666 ********* 2026-03-17 01:00:51.069514 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-17 01:00:51.069518 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-17 01:00:51.069522 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-17 01:00:51.069526 | orchestrator | 2026-03-17 01:00:51.069529 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-17 01:00:51.069533 | orchestrator | Tuesday 17 March 2026 00:59:07 +0000 (0:00:00.414) 0:00:24.080 ********* 2026-03-17 01:00:51.069537 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:00:51.069541 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:00:51.069593 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 01:00:51.069598 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-17 01:00:51.069607 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-17 01:00:51.069611 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-17 01:00:51.069615 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-17 01:00:51.069618 | orchestrator | 2026-03-17 01:00:51.069622 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-17 01:00:51.069626 | orchestrator | Tuesday 17 March 2026 00:59:07 +0000 (0:00:00.829) 0:00:24.909 ********* 2026-03-17 01:00:51.069629 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:00:51.069633 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:00:51.069637 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 01:00:51.069641 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-17 01:00:51.069644 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-17 01:00:51.069648 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-17 01:00:51.069655 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-17 01:00:51.069659 | orchestrator | 2026-03-17 01:00:51.069663 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-17 01:00:51.069667 | orchestrator | Tuesday 17 March 2026 00:59:09 +0000 (0:00:01.584) 0:00:26.494 ********* 2026-03-17 01:00:51.069670 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:00:51.069674 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:00:51.069678 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-17 01:00:51.069682 | orchestrator | 2026-03-17 01:00:51.069717 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-17 01:00:51.069723 | orchestrator | Tuesday 17 March 2026 00:59:09 +0000 (0:00:00.335) 0:00:26.830 ********* 2026-03-17 01:00:51.069728 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-17 01:00:51.069732 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-03-17 01:00:51.069736 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-17 01:00:51.069740 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-17 01:00:51.069744 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-17 01:00:51.069748 | orchestrator | 2026-03-17 01:00:51.069752 | orchestrator | TASK [generate keys] *********************************************************** 2026-03-17 01:00:51.069759 | orchestrator | Tuesday 17 March 2026 00:59:54 +0000 (0:00:44.860) 0:01:11.691 ********* 2026-03-17 01:00:51.069763 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:00:51.069767 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:00:51.069771 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:00:51.069774 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:00:51.069778 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:00:51.069782 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 
01:00:51.069786 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-17 01:00:51.069789 | orchestrator | 2026-03-17 01:00:51.069793 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-17 01:00:51.069797 | orchestrator | Tuesday 17 March 2026 01:00:19 +0000 (0:00:25.137) 0:01:36.828 ********* 2026-03-17 01:00:51.069801 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:00:51.069805 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:00:51.069837 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:00:51.069844 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:00:51.069850 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:00:51.069857 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:00:51.069864 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 01:00:51.069867 | orchestrator | 2026-03-17 01:00:51.069871 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-17 01:00:51.069875 | orchestrator | Tuesday 17 March 2026 01:00:31 +0000 (0:00:11.986) 0:01:48.815 ********* 2026-03-17 01:00:51.069879 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:00:51.069882 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-17 01:00:51.069886 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-17 01:00:51.069890 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:00:51.069894 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None)
2026-03-17 01:00:51.069900 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-17 01:00:51.069904 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:00:51.069908 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-17 01:00:51.069912 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-17 01:00:51.069915 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:00:51.069919 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-17 01:00:51.069923 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-17 01:00:51.069926 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:00:51.069930 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-17 01:00:51.069934 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-17 01:00:51.069938 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:00:51.069941 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-17 01:00:51.069949 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-17 01:00:51.069953 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-03-17 01:00:51.069957 | orchestrator |
2026-03-17 01:00:51.069961 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:00:51.069964 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-17 01:00:51.069969 |
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-17 01:00:51.069973 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-17 01:00:51.069976 | orchestrator |
2026-03-17 01:00:51.069980 | orchestrator |
2026-03-17 01:00:51.069984 | orchestrator |
2026-03-17 01:00:51.069988 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:00:51.069992 | orchestrator | Tuesday 17 March 2026 01:00:49 +0000 (0:00:17.907) 0:02:06.723 *********
2026-03-17 01:00:51.069995 | orchestrator | ===============================================================================
2026-03-17 01:00:51.069999 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.86s
2026-03-17 01:00:51.070003 | orchestrator | generate keys ---------------------------------------------------------- 25.14s
2026-03-17 01:00:51.070006 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.91s
2026-03-17 01:00:51.070010 | orchestrator | get keys from monitors ------------------------------------------------- 11.99s
2026-03-17 01:00:51.070039 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.31s
2026-03-17 01:00:51.070044 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.86s
2026-03-17 01:00:51.070047 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.58s
2026-03-17 01:00:51.070051 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.83s
2026-03-17 01:00:51.070055 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.81s
2026-03-17 01:00:51.070058 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.80s
2026-03-17
01:00:51.070062 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.73s
2026-03-17 01:00:51.070066 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.69s
2026-03-17 01:00:51.070070 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.63s
2026-03-17 01:00:51.070073 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.62s
2026-03-17 01:00:51.070077 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.61s
2026-03-17 01:00:51.070084 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.61s
2026-03-17 01:00:51.070087 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.59s
2026-03-17 01:00:51.070091 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.58s
2026-03-17 01:00:51.070095 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.55s
2026-03-17 01:00:51.070099 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.49s
2026-03-17 01:00:51.070102 | orchestrator | 2026-03-17 01:00:51 | INFO  | Task 68b73dd8-434c-462d-80d2-5e611fa6789c is in state STARTED
2026-03-17 01:00:51.070106 | orchestrator | 2026-03-17 01:00:51 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED
2026-03-17 01:00:51.070110 | orchestrator | 2026-03-17 01:00:51 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:00:54.127351 | orchestrator | 2026-03-17 01:00:54 | INFO  | Task 68b73dd8-434c-462d-80d2-5e611fa6789c is in state STARTED
2026-03-17 01:00:54.129661 | orchestrator | 2026-03-17 01:00:54 | INFO  | Task 5636d993-f15c-47b1-a0b6-619c2b8eb795 is in state STARTED
2026-03-17 01:00:54.132805 | orchestrator | 2026-03-17 01:00:54 | INFO  | Task
451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED
2026-03-17 01:00:54.133502 | orchestrator | 2026-03-17 01:00:54 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:00:57.163018 | orchestrator | 2026-03-17 01:00:57 | INFO  | Task 68b73dd8-434c-462d-80d2-5e611fa6789c is in state STARTED
2026-03-17 01:00:57.164990 | orchestrator | 2026-03-17 01:00:57 | INFO  | Task 5636d993-f15c-47b1-a0b6-619c2b8eb795 is in state STARTED
2026-03-17 01:00:57.166780 | orchestrator | 2026-03-17 01:00:57 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED
2026-03-17 01:00:57.166909 | orchestrator | 2026-03-17 01:00:57 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:01:00.210614 | orchestrator | 2026-03-17 01:01:00 | INFO  | Task 68b73dd8-434c-462d-80d2-5e611fa6789c is in state STARTED
2026-03-17 01:01:00.211157 | orchestrator | 2026-03-17 01:01:00 | INFO  | Task 5636d993-f15c-47b1-a0b6-619c2b8eb795 is in state STARTED
2026-03-17 01:01:00.212317 | orchestrator | 2026-03-17 01:01:00 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED
2026-03-17 01:01:00.212343 | orchestrator | 2026-03-17 01:01:00 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:01:03.252495 | orchestrator | 2026-03-17 01:01:03 | INFO  | Task 68b73dd8-434c-462d-80d2-5e611fa6789c is in state STARTED
2026-03-17 01:01:03.254419 | orchestrator | 2026-03-17 01:01:03 | INFO  | Task 5636d993-f15c-47b1-a0b6-619c2b8eb795 is in state STARTED
2026-03-17 01:01:03.256053 | orchestrator | 2026-03-17 01:01:03 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED
2026-03-17 01:01:03.256102 | orchestrator | 2026-03-17 01:01:03 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:01:06.315163 | orchestrator | 2026-03-17 01:01:06 | INFO  | Task 68b73dd8-434c-462d-80d2-5e611fa6789c is in state STARTED
2026-03-17 01:01:06.315236 | orchestrator | 2026-03-17 01:01:06 | INFO  | Task 5636d993-f15c-47b1-a0b6-619c2b8eb795 is in state
STARTED
2026-03-17 01:01:06.315245 | orchestrator | 2026-03-17 01:01:06 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED
2026-03-17 01:01:06.315308 | orchestrator | 2026-03-17 01:01:06 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:01:09.383265 | orchestrator |
2026-03-17 01:01:09.383361 | orchestrator |
2026-03-17 01:01:09.383373 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:01:09.383381 | orchestrator |
2026-03-17 01:01:09.383388 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:01:09.383396 | orchestrator | Tuesday 17 March 2026 00:59:30 +0000 (0:00:00.225) 0:00:00.225 *********
2026-03-17 01:01:09.383403 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:01:09.383411 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:01:09.383419 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:01:09.383425 | orchestrator |
2026-03-17 01:01:09.383433 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:01:09.383439 | orchestrator | Tuesday 17 March 2026 00:59:30 +0000 (0:00:00.249) 0:00:00.475 *********
2026-03-17 01:01:09.383446 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-17 01:01:09.383454 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-17 01:01:09.383461 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-17 01:01:09.383468 | orchestrator |
2026-03-17 01:01:09.383474 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-17 01:01:09.383481 | orchestrator |
2026-03-17 01:01:09.383487 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-17 01:01:09.383628 | orchestrator | Tuesday 17 March 2026 00:59:31 +0000 (0:00:00.337) 0:00:00.813 *********
2026-03-17
01:01:09.383639 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:01:09.383647 | orchestrator | 2026-03-17 01:01:09.383668 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-17 01:01:09.383675 | orchestrator | Tuesday 17 March 2026 00:59:31 +0000 (0:00:00.420) 0:00:01.233 ********* 2026-03-17 01:01:09.383689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:09.383880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:09.383904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:09.383910 | orchestrator | 2026-03-17 01:01:09.383917 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-17 01:01:09.383923 | orchestrator | Tuesday 17 March 2026 00:59:32 +0000 (0:00:01.101) 0:00:02.334 ********* 2026-03-17 01:01:09.383930 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:09.383936 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:09.383942 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:09.383948 | orchestrator | 2026-03-17 01:01:09.383954 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-17 01:01:09.383961 | orchestrator | Tuesday 17 March 2026 00:59:33 +0000 (0:00:00.340) 0:00:02.674 ********* 2026-03-17 01:01:09.383976 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-17 01:01:09.383983 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-17 01:01:09.383989 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-17 01:01:09.384001 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-17 01:01:09.384008 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-17 01:01:09.384014 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-17 01:01:09.384081 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-17 01:01:09.384088 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-17 01:01:09.384094 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-17 01:01:09.384099 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-17 01:01:09.384105 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-17 01:01:09.384111 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-17 01:01:09.384122 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-17 01:01:09.384128 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-17 01:01:09.384133 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-17 01:01:09.384139 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-17 01:01:09.384144 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-17 01:01:09.384150 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-17 
01:01:09.384156 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-17 01:01:09.384161 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-17 01:01:09.384166 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-17 01:01:09.384172 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-17 01:01:09.384177 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-17 01:01:09.384183 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-17 01:01:09.384190 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-17 01:01:09.384198 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-17 01:01:09.384204 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-17 01:01:09.384210 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-17 01:01:09.384216 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-17 01:01:09.384221 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-17 01:01:09.384226 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-17 01:01:09.384232 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-17 01:01:09.384237 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-17 01:01:09.384252 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-17 01:01:09.384257 | orchestrator | 2026-03-17 01:01:09.384263 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:09.384269 | orchestrator | Tuesday 17 March 2026 00:59:33 +0000 (0:00:00.661) 0:00:03.336 ********* 2026-03-17 01:01:09.384274 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:09.384279 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:09.384285 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:09.384290 | orchestrator | 2026-03-17 01:01:09.384296 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:09.384302 | orchestrator | Tuesday 17 March 2026 00:59:34 +0000 (0:00:00.268) 0:00:03.604 ********* 2026-03-17 01:01:09.384314 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.384322 | orchestrator | 2026-03-17 01:01:09.384327 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:09.384333 | orchestrator | Tuesday 17 March 2026 00:59:34 +0000 (0:00:00.117) 0:00:03.721 ********* 2026-03-17 01:01:09.384338 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.384344 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:09.384349 | orchestrator | 
skipping: [testbed-node-2] 2026-03-17 01:01:09.384355 | orchestrator | 2026-03-17 01:01:09.384360 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:09.384366 | orchestrator | Tuesday 17 March 2026 00:59:34 +0000 (0:00:00.362) 0:00:04.083 ********* 2026-03-17 01:01:09.384372 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:09.384378 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:09.384383 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:09.384389 | orchestrator | 2026-03-17 01:01:09.384395 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:09.384400 | orchestrator | Tuesday 17 March 2026 00:59:34 +0000 (0:00:00.250) 0:00:04.334 ********* 2026-03-17 01:01:09.384406 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.384412 | orchestrator | 2026-03-17 01:01:09.384417 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:09.384423 | orchestrator | Tuesday 17 March 2026 00:59:34 +0000 (0:00:00.110) 0:00:04.444 ********* 2026-03-17 01:01:09.384428 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.384434 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:09.384439 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.384445 | orchestrator | 2026-03-17 01:01:09.384456 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:09.384462 | orchestrator | Tuesday 17 March 2026 00:59:35 +0000 (0:00:00.249) 0:00:04.693 ********* 2026-03-17 01:01:09.384468 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:09.384473 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:09.384479 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:09.384485 | orchestrator | 2026-03-17 01:01:09.384491 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2026-03-17 01:01:09.384497 | orchestrator | Tuesday 17 March 2026 00:59:35 +0000 (0:00:00.281) 0:00:04.975 ********* 2026-03-17 01:01:09.384503 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.384509 | orchestrator | 2026-03-17 01:01:09.384515 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:09.384521 | orchestrator | Tuesday 17 March 2026 00:59:35 +0000 (0:00:00.237) 0:00:05.212 ********* 2026-03-17 01:01:09.384527 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.384533 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:09.384539 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.384544 | orchestrator | 2026-03-17 01:01:09.384550 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:09.384556 | orchestrator | Tuesday 17 March 2026 00:59:35 +0000 (0:00:00.265) 0:00:05.477 ********* 2026-03-17 01:01:09.384569 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:09.384575 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:09.384581 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:09.384587 | orchestrator | 2026-03-17 01:01:09.384593 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:09.384600 | orchestrator | Tuesday 17 March 2026 00:59:36 +0000 (0:00:00.271) 0:00:05.749 ********* 2026-03-17 01:01:09.384606 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.384612 | orchestrator | 2026-03-17 01:01:09.384618 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:09.384623 | orchestrator | Tuesday 17 March 2026 00:59:36 +0000 (0:00:00.114) 0:00:05.864 ********* 2026-03-17 01:01:09.384629 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.384635 | orchestrator | skipping: [testbed-node-1] 2026-03-17 
01:01:09.384641 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.384647 | orchestrator | 2026-03-17 01:01:09.384654 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:09.384660 | orchestrator | Tuesday 17 March 2026 00:59:36 +0000 (0:00:00.252) 0:00:06.116 ********* 2026-03-17 01:01:09.384666 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:09.384672 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:09.384678 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:09.384685 | orchestrator | 2026-03-17 01:01:09.384691 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:09.384698 | orchestrator | Tuesday 17 March 2026 00:59:37 +0000 (0:00:00.372) 0:00:06.489 ********* 2026-03-17 01:01:09.384704 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.384710 | orchestrator | 2026-03-17 01:01:09.384717 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:09.384723 | orchestrator | Tuesday 17 March 2026 00:59:37 +0000 (0:00:00.123) 0:00:06.613 ********* 2026-03-17 01:01:09.384729 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.384736 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:09.384742 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.384748 | orchestrator | 2026-03-17 01:01:09.384755 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:09.384762 | orchestrator | Tuesday 17 March 2026 00:59:37 +0000 (0:00:00.259) 0:00:06.872 ********* 2026-03-17 01:01:09.384768 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:09.384774 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:09.384781 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:09.384788 | orchestrator | 2026-03-17 01:01:09.384794 | orchestrator | TASK [horizon : Check if policies 
shall be overwritten] ************************ 2026-03-17 01:01:09.384835 | orchestrator | Tuesday 17 March 2026 00:59:37 +0000 (0:00:00.283) 0:00:07.156 ********* 2026-03-17 01:01:09.384842 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.384849 | orchestrator | 2026-03-17 01:01:09.384855 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:09.384862 | orchestrator | Tuesday 17 March 2026 00:59:37 +0000 (0:00:00.134) 0:00:07.291 ********* 2026-03-17 01:01:09.384868 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.384874 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:09.384881 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.384887 | orchestrator | 2026-03-17 01:01:09.384894 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:09.384909 | orchestrator | Tuesday 17 March 2026 00:59:38 +0000 (0:00:00.290) 0:00:07.581 ********* 2026-03-17 01:01:09.384915 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:09.384922 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:09.384930 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:09.384937 | orchestrator | 2026-03-17 01:01:09.384943 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:09.384949 | orchestrator | Tuesday 17 March 2026 00:59:38 +0000 (0:00:00.492) 0:00:08.074 ********* 2026-03-17 01:01:09.384955 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.384979 | orchestrator | 2026-03-17 01:01:09.384986 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:09.384993 | orchestrator | Tuesday 17 March 2026 00:59:38 +0000 (0:00:00.127) 0:00:08.202 ********* 2026-03-17 01:01:09.385000 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.385007 | orchestrator | skipping: [testbed-node-1] 
2026-03-17 01:01:09.385014 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.385021 | orchestrator | 2026-03-17 01:01:09.385027 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:09.385035 | orchestrator | Tuesday 17 March 2026 00:59:39 +0000 (0:00:00.305) 0:00:08.508 ********* 2026-03-17 01:01:09.385042 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:09.385049 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:09.385056 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:09.385063 | orchestrator | 2026-03-17 01:01:09.385069 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:09.385076 | orchestrator | Tuesday 17 March 2026 00:59:39 +0000 (0:00:00.295) 0:00:08.803 ********* 2026-03-17 01:01:09.385089 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.385096 | orchestrator | 2026-03-17 01:01:09.385103 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:09.385110 | orchestrator | Tuesday 17 March 2026 00:59:39 +0000 (0:00:00.120) 0:00:08.923 ********* 2026-03-17 01:01:09.385117 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.385124 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:09.385131 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.385138 | orchestrator | 2026-03-17 01:01:09.385145 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:09.385152 | orchestrator | Tuesday 17 March 2026 00:59:39 +0000 (0:00:00.453) 0:00:09.377 ********* 2026-03-17 01:01:09.385159 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:09.385166 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:09.385172 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:09.385179 | orchestrator | 2026-03-17 01:01:09.385186 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************ 2026-03-17 01:01:09.385193 | orchestrator | Tuesday 17 March 2026 00:59:40 +0000 (0:00:00.363) 0:00:09.741 ********* 2026-03-17 01:01:09.385200 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.385207 | orchestrator | 2026-03-17 01:01:09.385214 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:09.385220 | orchestrator | Tuesday 17 March 2026 00:59:40 +0000 (0:00:00.128) 0:00:09.869 ********* 2026-03-17 01:01:09.385227 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.385233 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:09.385239 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.385245 | orchestrator | 2026-03-17 01:01:09.385250 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:01:09.385256 | orchestrator | Tuesday 17 March 2026 00:59:40 +0000 (0:00:00.283) 0:00:10.153 ********* 2026-03-17 01:01:09.385262 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:01:09.385268 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:01:09.385275 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:01:09.385281 | orchestrator | 2026-03-17 01:01:09.385288 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:01:09.385294 | orchestrator | Tuesday 17 March 2026 00:59:40 +0000 (0:00:00.313) 0:00:10.466 ********* 2026-03-17 01:01:09.385301 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.385307 | orchestrator | 2026-03-17 01:01:09.385313 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:01:09.385319 | orchestrator | Tuesday 17 March 2026 00:59:41 +0000 (0:00:00.127) 0:00:10.594 ********* 2026-03-17 01:01:09.385326 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.385332 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 01:01:09.385338 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.385344 | orchestrator | 2026-03-17 01:01:09.385355 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-17 01:01:09.385362 | orchestrator | Tuesday 17 March 2026 00:59:41 +0000 (0:00:00.468) 0:00:11.062 ********* 2026-03-17 01:01:09.385368 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:01:09.385374 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:01:09.385379 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:01:09.385384 | orchestrator | 2026-03-17 01:01:09.385390 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-17 01:01:09.385395 | orchestrator | Tuesday 17 March 2026 00:59:43 +0000 (0:00:01.438) 0:00:12.501 ********* 2026-03-17 01:01:09.385402 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-17 01:01:09.385409 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-17 01:01:09.385415 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-17 01:01:09.385421 | orchestrator | 2026-03-17 01:01:09.385427 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-17 01:01:09.385432 | orchestrator | Tuesday 17 March 2026 00:59:44 +0000 (0:00:01.648) 0:00:14.149 ********* 2026-03-17 01:01:09.385439 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-17 01:01:09.385446 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-17 01:01:09.385453 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-17 01:01:09.385459 | 
orchestrator | 2026-03-17 01:01:09.385473 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-17 01:01:09.385480 | orchestrator | Tuesday 17 March 2026 00:59:46 +0000 (0:00:02.105) 0:00:16.255 ********* 2026-03-17 01:01:09.385486 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-17 01:01:09.385493 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-17 01:01:09.385500 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-17 01:01:09.385507 | orchestrator | 2026-03-17 01:01:09.385514 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-17 01:01:09.385521 | orchestrator | Tuesday 17 March 2026 00:59:48 +0000 (0:00:02.115) 0:00:18.370 ********* 2026-03-17 01:01:09.385527 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.385534 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:09.385541 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.385547 | orchestrator | 2026-03-17 01:01:09.385553 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-17 01:01:09.385560 | orchestrator | Tuesday 17 March 2026 00:59:49 +0000 (0:00:00.286) 0:00:18.657 ********* 2026-03-17 01:01:09.385566 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.385573 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:09.385579 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.385585 | orchestrator | 2026-03-17 01:01:09.385597 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-17 01:01:09.385604 | orchestrator | Tuesday 17 March 2026 00:59:49 +0000 (0:00:00.285) 0:00:18.942 ********* 2026-03-17 01:01:09.385611 | 
orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:01:09.385618 | orchestrator | 2026-03-17 01:01:09.385624 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-17 01:01:09.385631 | orchestrator | Tuesday 17 March 2026 00:59:50 +0000 (0:00:00.733) 0:00:19.676 ********* 2026-03-17 01:01:09.385642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:09.385670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:09.385685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:09.385692 | orchestrator | 2026-03-17 01:01:09.385699 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-17 01:01:09.385706 | orchestrator | Tuesday 17 March 2026 00:59:51 +0000 (0:00:01.441) 0:00:21.117 ********* 2026-03-17 01:01:09.385723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:01:09.385737 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.385749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:01:09 | INFO  | Task 68b73dd8-434c-462d-80d2-5e611fa6789c is in state SUCCESS 2026-03-17 01:01:09.385766 | 
orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:09.385777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:01:09.385789 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.385796 | orchestrator | 2026-03-17 01:01:09.385864 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-17 01:01:09.385870 | orchestrator | Tuesday 17 March 2026 00:59:52 +0000 (0:00:00.751) 0:00:21.868 ********* 2026-03-17 01:01:09.385884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:01:09.385951 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.385966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:01:09.385980 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:09.385997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:01:09.386066 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.386075 | orchestrator | 2026-03-17 01:01:09.386082 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-17 01:01:09.386090 | orchestrator | Tuesday 17 March 2026 00:59:53 +0000 (0:00:00.763) 0:00:22.631 ********* 2026-03-17 01:01:09.386097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:09.386120 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:09.386134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:01:09.386142 | orchestrator | 2026-03-17 01:01:09.386149 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-17 01:01:09.386156 | orchestrator | Tuesday 17 March 2026 00:59:54 +0000 (0:00:01.514) 0:00:24.146 ********* 2026-03-17 01:01:09.386164 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:01:09.386171 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:01:09.386179 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:01:09.386186 | orchestrator | 2026-03-17 01:01:09.386198 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-17 01:01:09.386206 | orchestrator | Tuesday 17 March 2026 00:59:55 +0000 (0:00:00.371) 0:00:24.517 ********* 2026-03-17 01:01:09.386213 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:01:09.386221 | orchestrator | 2026-03-17 01:01:09.386228 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-17 01:01:09.386236 | orchestrator | Tuesday 17 March 2026 00:59:55 +0000 (0:00:00.519) 0:00:25.036 ********* 2026-03-17 01:01:09.386243 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:01:09.386256 | orchestrator | 2026-03-17 01:01:09.386263 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-17 01:01:09.386272 | orchestrator | Tuesday 17 March 2026 00:59:58 +0000 (0:00:02.825) 0:00:27.862 ********* 2026-03-17 
01:01:09.386280 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:01:09.386287 | orchestrator | 2026-03-17 01:01:09.386294 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-17 01:01:09.386335 | orchestrator | Tuesday 17 March 2026 01:00:01 +0000 (0:00:02.996) 0:00:30.859 ********* 2026-03-17 01:01:09.386342 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:01:09.386353 | orchestrator | 2026-03-17 01:01:09.386359 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-17 01:01:09.386364 | orchestrator | Tuesday 17 March 2026 01:00:18 +0000 (0:00:16.976) 0:00:47.835 ********* 2026-03-17 01:01:09.386370 | orchestrator | 2026-03-17 01:01:09.386380 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-17 01:01:09.386386 | orchestrator | Tuesday 17 March 2026 01:00:18 +0000 (0:00:00.065) 0:00:47.900 ********* 2026-03-17 01:01:09.386406 | orchestrator | 2026-03-17 01:01:09.386412 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-17 01:01:09.386426 | orchestrator | Tuesday 17 March 2026 01:00:18 +0000 (0:00:00.064) 0:00:47.964 ********* 2026-03-17 01:01:09.386432 | orchestrator | 2026-03-17 01:01:09.386438 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-17 01:01:09.386445 | orchestrator | Tuesday 17 March 2026 01:00:18 +0000 (0:00:00.066) 0:00:48.031 ********* 2026-03-17 01:01:09.386451 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:01:09.386461 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:01:09.386467 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:01:09.386473 | orchestrator | 2026-03-17 01:01:09.386480 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:01:09.386487 | orchestrator | testbed-node-0 : 
ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-17 01:01:09.386495 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-17 01:01:09.386502 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-17 01:01:09.386509 | orchestrator | 2026-03-17 01:01:09.386519 | orchestrator | 2026-03-17 01:01:09.386529 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:01:09.386535 | orchestrator | Tuesday 17 March 2026 01:01:07 +0000 (0:00:48.861) 0:01:36.893 ********* 2026-03-17 01:01:09.386541 | orchestrator | =============================================================================== 2026-03-17 01:01:09.386548 | orchestrator | horizon : Restart horizon container ------------------------------------ 48.86s 2026-03-17 01:01:09.386554 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.98s 2026-03-17 01:01:09.386561 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.00s 2026-03-17 01:01:09.386567 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.83s 2026-03-17 01:01:09.386574 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.12s 2026-03-17 01:01:09.386581 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.11s 2026-03-17 01:01:09.386587 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.65s 2026-03-17 01:01:09.386594 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.51s 2026-03-17 01:01:09.386600 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.44s 2026-03-17 01:01:09.386608 | orchestrator | horizon : Copying over config.json files 
for services ------------------- 1.44s 2026-03-17 01:01:09.386614 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.10s 2026-03-17 01:01:09.386629 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.76s 2026-03-17 01:01:09.386635 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.75s 2026-03-17 01:01:09.386642 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2026-03-17 01:01:09.386648 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s 2026-03-17 01:01:09.386654 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-03-17 01:01:09.386661 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2026-03-17 01:01:09.386668 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.47s 2026-03-17 01:01:09.386674 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.45s 2026-03-17 01:01:09.386681 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.42s 2026-03-17 01:01:09.386693 | orchestrator | 2026-03-17 01:01:09 | INFO  | Task 5636d993-f15c-47b1-a0b6-619c2b8eb795 is in state STARTED 2026-03-17 01:01:09.386701 | orchestrator | 2026-03-17 01:01:09 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:09.386707 | orchestrator | 2026-03-17 01:01:09 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:12.415468 | orchestrator | 2026-03-17 01:01:12 | INFO  | Task 5636d993-f15c-47b1-a0b6-619c2b8eb795 is in state STARTED 2026-03-17 01:01:12.415529 | orchestrator | 2026-03-17 01:01:12 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:12.415538 | orchestrator | 2026-03-17 
01:01:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:15.464117 | orchestrator | 2026-03-17 01:01:15 | INFO  | Task 5636d993-f15c-47b1-a0b6-619c2b8eb795 is in state STARTED 2026-03-17 01:01:15.464161 | orchestrator | 2026-03-17 01:01:15 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:15.464166 | orchestrator | 2026-03-17 01:01:15 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:18.507662 | orchestrator | 2026-03-17 01:01:18 | INFO  | Task 5636d993-f15c-47b1-a0b6-619c2b8eb795 is in state STARTED 2026-03-17 01:01:18.507735 | orchestrator | 2026-03-17 01:01:18 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:18.507744 | orchestrator | 2026-03-17 01:01:18 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:21.555665 | orchestrator | 2026-03-17 01:01:21 | INFO  | Task 5636d993-f15c-47b1-a0b6-619c2b8eb795 is in state STARTED 2026-03-17 01:01:21.557655 | orchestrator | 2026-03-17 01:01:21 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:21.557695 | orchestrator | 2026-03-17 01:01:21 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:24.598376 | orchestrator | 2026-03-17 01:01:24 | INFO  | Task 5636d993-f15c-47b1-a0b6-619c2b8eb795 is in state STARTED 2026-03-17 01:01:24.599728 | orchestrator | 2026-03-17 01:01:24 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:24.599774 | orchestrator | 2026-03-17 01:01:24 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:27.643663 | orchestrator | 2026-03-17 01:01:27 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:01:27.643739 | orchestrator | 2026-03-17 01:01:27 | INFO  | Task 5636d993-f15c-47b1-a0b6-619c2b8eb795 is in state SUCCESS 2026-03-17 01:01:27.645958 | orchestrator | 2026-03-17 01:01:27 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state 
STARTED 2026-03-17 01:01:27.646416 | orchestrator | 2026-03-17 01:01:27 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:30.689953 | orchestrator | 2026-03-17 01:01:30 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:01:30.691231 | orchestrator | 2026-03-17 01:01:30 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:30.691274 | orchestrator | 2026-03-17 01:01:30 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:33.727647 | orchestrator | 2026-03-17 01:01:33 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:01:33.729289 | orchestrator | 2026-03-17 01:01:33 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:33.729349 | orchestrator | 2026-03-17 01:01:33 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:36.771701 | orchestrator | 2026-03-17 01:01:36 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:01:36.772985 | orchestrator | 2026-03-17 01:01:36 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:36.773227 | orchestrator | 2026-03-17 01:01:36 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:39.822854 | orchestrator | 2026-03-17 01:01:39 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:01:39.823071 | orchestrator | 2026-03-17 01:01:39 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:39.823088 | orchestrator | 2026-03-17 01:01:39 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:42.861149 | orchestrator | 2026-03-17 01:01:42 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:01:42.862611 | orchestrator | 2026-03-17 01:01:42 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:42.862657 | orchestrator | 2026-03-17 01:01:42 | INFO  
| Wait 1 second(s) until the next check 2026-03-17 01:01:45.899694 | orchestrator | 2026-03-17 01:01:45 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:01:45.901992 | orchestrator | 2026-03-17 01:01:45 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:45.902185 | orchestrator | 2026-03-17 01:01:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:48.944605 | orchestrator | 2026-03-17 01:01:48 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:01:48.945859 | orchestrator | 2026-03-17 01:01:48 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:48.945917 | orchestrator | 2026-03-17 01:01:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:51.986269 | orchestrator | 2026-03-17 01:01:51 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:01:51.987458 | orchestrator | 2026-03-17 01:01:51 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:51.987500 | orchestrator | 2026-03-17 01:01:51 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:55.043205 | orchestrator | 2026-03-17 01:01:55 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:01:55.044094 | orchestrator | 2026-03-17 01:01:55 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:55.044146 | orchestrator | 2026-03-17 01:01:55 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:01:58.087918 | orchestrator | 2026-03-17 01:01:58 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:01:58.089815 | orchestrator | 2026-03-17 01:01:58 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:01:58.089892 | orchestrator | 2026-03-17 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:01.125218 | 
orchestrator | 2026-03-17 01:02:01 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:02:01.125561 | orchestrator | 2026-03-17 01:02:01 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state STARTED 2026-03-17 01:02:01.125578 | orchestrator | 2026-03-17 01:02:01 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:04.166006 | orchestrator | 2026-03-17 01:02:04 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:02:04.167071 | orchestrator | 2026-03-17 01:02:04 | INFO  | Task 451a45a9-66f3-4b0a-b7ce-621b535b0191 is in state SUCCESS 2026-03-17 01:02:04.168434 | orchestrator | 2026-03-17 01:02:04.168461 | orchestrator | 2026-03-17 01:02:04.168467 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-17 01:02:04.168472 | orchestrator | 2026-03-17 01:02:04.168477 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-17 01:02:04.168482 | orchestrator | Tuesday 17 March 2026 01:00:54 +0000 (0:00:00.142) 0:00:00.142 ********* 2026-03-17 01:02:04.168487 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-17 01:02:04.168493 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:04.168497 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:04.168502 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:02:04.168506 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:04.168511 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-17 01:02:04.168516 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-17 01:02:04.168520 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-17 01:02:04.168524 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-17 01:02:04.168528 | orchestrator | 2026-03-17 01:02:04.168532 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-17 01:02:04.168535 | orchestrator | Tuesday 17 March 2026 01:00:59 +0000 (0:00:05.185) 0:00:05.327 ********* 2026-03-17 01:02:04.168539 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-17 01:02:04.168543 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:04.168547 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:04.168551 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:02:04.168555 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:04.168559 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-17 01:02:04.168563 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-17 01:02:04.168566 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-17 01:02:04.168570 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-17 01:02:04.168574 | orchestrator | 2026-03-17 01:02:04.168578 | orchestrator | TASK [Create share directory] 
************************************************** 2026-03-17 01:02:04.168630 | orchestrator | Tuesday 17 March 2026 01:01:03 +0000 (0:00:03.735) 0:00:09.063 ********* 2026-03-17 01:02:04.168637 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-17 01:02:04.168641 | orchestrator | 2026-03-17 01:02:04.168645 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-17 01:02:04.168649 | orchestrator | Tuesday 17 March 2026 01:01:03 +0000 (0:00:00.987) 0:00:10.050 ********* 2026-03-17 01:02:04.168653 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-17 01:02:04.168657 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:04.168661 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:04.168671 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:02:04.168675 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:04.168714 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-17 01:02:04.168718 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-17 01:02:04.168866 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-17 01:02:04.168873 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-17 01:02:04.168877 | orchestrator | 2026-03-17 01:02:04.168881 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-17 01:02:04.168885 | orchestrator | Tuesday 17 March 2026 01:01:16 +0000 (0:00:12.238) 0:00:22.289 ********* 2026-03-17 01:02:04.168889 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-17 01:02:04.168893 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-17 01:02:04.168897 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-17 01:02:04.168901 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-17 01:02:04.168920 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-17 01:02:04.168925 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-17 01:02:04.168928 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-17 01:02:04.168932 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-17 01:02:04.168936 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-17 01:02:04.168940 | orchestrator | 2026-03-17 01:02:04.168944 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-17 01:02:04.168948 | orchestrator | Tuesday 17 March 2026 01:01:18 +0000 (0:00:02.535) 0:00:24.824 ********* 2026-03-17 01:02:04.168952 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-17 01:02:04.168956 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:04.168960 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-17 01:02:04.168964 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:02:04.168968 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.cinder.keyring) 2026-03-17 01:02:04.168971 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-17 01:02:04.168975 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-17 01:02:04.168981 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-17 01:02:04.168994 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-17 01:02:04.169001 | orchestrator | 2026-03-17 01:02:04.169007 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:02:04.169011 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:02:04.169015 | orchestrator | 2026-03-17 01:02:04.169019 | orchestrator | 2026-03-17 01:02:04.169023 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:02:04.169027 | orchestrator | Tuesday 17 March 2026 01:01:25 +0000 (0:00:06.583) 0:00:31.407 ********* 2026-03-17 01:02:04.169031 | orchestrator | =============================================================================== 2026-03-17 01:02:04.169034 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.24s 2026-03-17 01:02:04.169038 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.58s 2026-03-17 01:02:04.169042 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.19s 2026-03-17 01:02:04.169046 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.74s 2026-03-17 01:02:04.169050 | orchestrator | Check if target directories exist --------------------------------------- 2.54s 2026-03-17 01:02:04.169054 | orchestrator | Create share directory -------------------------------------------------- 0.99s 2026-03-17 01:02:04.169058 | 
orchestrator | 2026-03-17 01:02:04.169061 | orchestrator | 2026-03-17 01:02:04.169066 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:02:04.169069 | orchestrator | 2026-03-17 01:02:04.169073 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:02:04.169077 | orchestrator | Tuesday 17 March 2026 00:59:30 +0000 (0:00:00.223) 0:00:00.223 ********* 2026-03-17 01:02:04.169081 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:04.169085 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:04.169089 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:04.169092 | orchestrator | 2026-03-17 01:02:04.169096 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:02:04.169100 | orchestrator | Tuesday 17 March 2026 00:59:31 +0000 (0:00:00.258) 0:00:00.481 ********* 2026-03-17 01:02:04.169104 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-17 01:02:04.169108 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-17 01:02:04.169115 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-17 01:02:04.169119 | orchestrator | 2026-03-17 01:02:04.169123 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-17 01:02:04.169127 | orchestrator | 2026-03-17 01:02:04.169130 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:02:04.169134 | orchestrator | Tuesday 17 March 2026 00:59:31 +0000 (0:00:00.352) 0:00:00.833 ********* 2026-03-17 01:02:04.169138 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:04.169142 | orchestrator | 2026-03-17 01:02:04.169146 | orchestrator | TASK [keystone : Ensuring config directories exist] 
**************************** 2026-03-17 01:02:04.169150 | orchestrator | Tuesday 17 March 2026 00:59:31 +0000 (0:00:00.451) 0:00:01.284 ********* 2026-03-17 01:02:04.169172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.169184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.169189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.169196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-03-17 01:02:04.169201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 
01:02:04.169229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169237 | orchestrator | 2026-03-17 01:02:04.169242 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-17 01:02:04.169249 | orchestrator | Tuesday 17 March 2026 00:59:33 +0000 (0:00:01.862) 0:00:03.147 ********* 2026-03-17 01:02:04.169256 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.169261 | orchestrator | 2026-03-17 01:02:04.169265 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-17 01:02:04.169269 | orchestrator | Tuesday 17 March 2026 00:59:33 +0000 (0:00:00.120) 0:00:03.267 ********* 2026-03-17 01:02:04.169273 | orchestrator 
| skipping: [testbed-node-0] 2026-03-17 01:02:04.169277 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:04.169280 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:04.169284 | orchestrator | 2026-03-17 01:02:04.169288 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-17 01:02:04.169292 | orchestrator | Tuesday 17 March 2026 00:59:34 +0000 (0:00:00.336) 0:00:03.604 ********* 2026-03-17 01:02:04.169296 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:02:04.169300 | orchestrator | 2026-03-17 01:02:04.169304 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:02:04.169310 | orchestrator | Tuesday 17 March 2026 00:59:34 +0000 (0:00:00.726) 0:00:04.331 ********* 2026-03-17 01:02:04.169314 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:04.169318 | orchestrator | 2026-03-17 01:02:04.169322 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-17 01:02:04.169326 | orchestrator | Tuesday 17 March 2026 00:59:35 +0000 (0:00:00.473) 0:00:04.804 ********* 2026-03-17 01:02:04.169333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.169340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.169345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.169349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169396 | orchestrator | 2026-03-17 01:02:04.169402 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-17 01:02:04.169409 | orchestrator | Tuesday 17 March 2026 00:59:38 +0000 (0:00:03.187) 0:00:07.992 ********* 2026-03-17 01:02:04.169417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:04.169428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:04.169440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:04.169448 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.169455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:04.169463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:04.169493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:04.169501 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:04.169514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:04.169530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:04.169538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:04.169546 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:04.169550 | orchestrator | 2026-03-17 01:02:04.169555 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-17 01:02:04.169560 | orchestrator | Tuesday 17 March 2026 00:59:39 +0000 (0:00:00.534) 0:00:08.527 ********* 2026-03-17 01:02:04.169565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:04.169570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:04.169579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:04.169584 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.169592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 
01:02:04.169597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:04.169602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:04.169607 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:04.169612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:04.169621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:04.169626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:04.169630 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:04.169635 | orchestrator | 2026-03-17 01:02:04.169639 | orchestrator | TASK [keystone : Copying over config.json files for 
services] ****************** 2026-03-17 01:02:04.169646 | orchestrator | Tuesday 17 March 2026 00:59:39 +0000 (0:00:00.741) 0:00:09.269 ********* 2026-03-17 01:02:04.169651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.169656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.169668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.169673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-03-17 01:02:04.169681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 
01:02:04.169696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169708 | orchestrator | 2026-03-17 01:02:04.169713 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-17 01:02:04.169719 | orchestrator | Tuesday 17 March 2026 00:59:42 +0000 (0:00:03.147) 0:00:12.416 ********* 2026-03-17 01:02:04.169724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.169732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:04.169737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.169742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:04.169803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.169809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:04.169817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:04.169835 | orchestrator | 2026-03-17 01:02:04.169840 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-17 01:02:04.169844 | orchestrator | Tuesday 17 March 2026 00:59:47 +0000 (0:00:05.050) 0:00:17.467 ********* 2026-03-17 01:02:04.169848 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:04.169853 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:04.169858 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:04.169862 | orchestrator | 2026-03-17 01:02:04.169867 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-17 01:02:04.169872 | orchestrator | Tuesday 17 March 2026 00:59:49 +0000 (0:00:01.440) 0:00:18.907 ********* 2026-03-17 01:02:04.169876 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.169880 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:04.169885 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:04.169890 | orchestrator | 2026-03-17 01:02:04.169895 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-17 01:02:04.169899 | orchestrator | Tuesday 17 
March 2026 00:59:49 +0000 (0:00:00.489) 0:00:19.397 ********* 2026-03-17 01:02:04.169904 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.169907 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:04.169911 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:04.169915 | orchestrator | 2026-03-17 01:02:04.169919 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-17 01:02:04.169923 | orchestrator | Tuesday 17 March 2026 00:59:50 +0000 (0:00:00.337) 0:00:19.735 ********* 2026-03-17 01:02:04.169927 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.169931 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:04.169934 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:04.169938 | orchestrator | 2026-03-17 01:02:04.169942 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-17 01:02:04.169946 | orchestrator | Tuesday 17 March 2026 00:59:50 +0000 (0:00:00.549) 0:00:20.284 ********* 2026-03-17 01:02:04.169952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:04.169959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:04.169963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:04.169970 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.169974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:04.169978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:04.169986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:04.169990 | 
orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:04.169997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-17 01:02:04.170002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:02:04.170009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:02:04.170049 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:04.170057 | orchestrator | 2026-03-17 01:02:04.170064 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:02:04.170069 | orchestrator | Tuesday 17 March 2026 00:59:51 +0000 (0:00:00.642) 0:00:20.927 ********* 2026-03-17 01:02:04.170075 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.170081 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:04.170087 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:04.170093 | orchestrator | 2026-03-17 01:02:04.170099 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-17 01:02:04.170105 | orchestrator | Tuesday 17 March 2026 00:59:51 +0000 (0:00:00.296) 0:00:21.224 ********* 2026-03-17 01:02:04.170110 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-17 01:02:04.170116 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-17 01:02:04.170122 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-17 01:02:04.170128 | orchestrator | 2026-03-17 01:02:04.170134 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-17 01:02:04.170140 | orchestrator | Tuesday 17 March 2026 00:59:53 +0000 (0:00:01.741) 
0:00:22.965 ********* 2026-03-17 01:02:04.170147 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:02:04.170154 | orchestrator | 2026-03-17 01:02:04.170160 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-17 01:02:04.170166 | orchestrator | Tuesday 17 March 2026 00:59:54 +0000 (0:00:01.067) 0:00:24.033 ********* 2026-03-17 01:02:04.170171 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.170178 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:04.170184 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:04.170191 | orchestrator | 2026-03-17 01:02:04.170199 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-17 01:02:04.170203 | orchestrator | Tuesday 17 March 2026 00:59:55 +0000 (0:00:00.763) 0:00:24.796 ********* 2026-03-17 01:02:04.170207 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-17 01:02:04.170211 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:02:04.170214 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-17 01:02:04.170218 | orchestrator | 2026-03-17 01:02:04.170222 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-17 01:02:04.170226 | orchestrator | Tuesday 17 March 2026 00:59:56 +0000 (0:00:00.975) 0:00:25.771 ********* 2026-03-17 01:02:04.170230 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:04.170234 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:04.170238 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:04.170242 | orchestrator | 2026-03-17 01:02:04.170250 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-17 01:02:04.170254 | orchestrator | Tuesday 17 March 2026 00:59:56 +0000 (0:00:00.290) 0:00:26.062 ********* 2026-03-17 01:02:04.170257 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 
'dest': 'crontab'}) 2026-03-17 01:02:04.170261 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-17 01:02:04.170265 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-17 01:02:04.170269 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-17 01:02:04.170280 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-17 01:02:04.170285 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-17 01:02:04.170289 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-17 01:02:04.170293 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-17 01:02:04.170296 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-17 01:02:04.170300 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-17 01:02:04.170305 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-17 01:02:04.170308 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-17 01:02:04.170312 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-17 01:02:04.170316 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-17 01:02:04.170320 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-17 01:02:04.170324 | 
orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-17 01:02:04.170328 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-17 01:02:04.170331 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-17 01:02:04.170335 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-17 01:02:04.170339 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-17 01:02:04.170343 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-17 01:02:04.170348 | orchestrator | 2026-03-17 01:02:04.170355 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-17 01:02:04.170361 | orchestrator | Tuesday 17 March 2026 01:00:05 +0000 (0:00:09.027) 0:00:35.090 ********* 2026-03-17 01:02:04.170367 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-17 01:02:04.170373 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-17 01:02:04.170380 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-17 01:02:04.170386 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-17 01:02:04.170392 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-17 01:02:04.170398 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-17 01:02:04.170405 | orchestrator | 2026-03-17 01:02:04.170411 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-17 01:02:04.170418 | orchestrator | Tuesday 17 March 2026 01:00:08 +0000 
(0:00:02.753) 0:00:37.843 ********* 2026-03-17 01:02:04.170434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.170448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.170454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-17 01:02:04.170459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:04.170463 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:04.170472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:02:04.170476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:04.170483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:04.170487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:02:04.170491 | orchestrator | 2026-03-17 01:02:04.170495 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:02:04.170499 | orchestrator | Tuesday 17 March 2026 01:00:10 +0000 (0:00:02.213) 0:00:40.057 ********* 2026-03-17 01:02:04.170503 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.170507 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:04.170511 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:04.170515 | orchestrator | 2026-03-17 01:02:04.170519 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-17 01:02:04.170523 | orchestrator | Tuesday 17 March 2026 01:00:10 +0000 (0:00:00.289) 0:00:40.346 ********* 
2026-03-17 01:02:04.170527 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:04.170531 | orchestrator | 2026-03-17 01:02:04.170534 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-17 01:02:04.170539 | orchestrator | Tuesday 17 March 2026 01:00:13 +0000 (0:00:02.206) 0:00:42.553 ********* 2026-03-17 01:02:04.170546 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:04.170550 | orchestrator | 2026-03-17 01:02:04.170554 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-17 01:02:04.170558 | orchestrator | Tuesday 17 March 2026 01:00:15 +0000 (0:00:02.408) 0:00:44.962 ********* 2026-03-17 01:02:04.170562 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:04.170566 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:04.170570 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:04.170573 | orchestrator | 2026-03-17 01:02:04.170577 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-17 01:02:04.170581 | orchestrator | Tuesday 17 March 2026 01:00:16 +0000 (0:00:01.036) 0:00:45.999 ********* 2026-03-17 01:02:04.170587 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:04.170593 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:04.170600 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:04.170607 | orchestrator | 2026-03-17 01:02:04.170614 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-17 01:02:04.170621 | orchestrator | Tuesday 17 March 2026 01:00:16 +0000 (0:00:00.313) 0:00:46.312 ********* 2026-03-17 01:02:04.170627 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.170634 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:04.170641 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:04.170648 | orchestrator | 2026-03-17 01:02:04.170655 | orchestrator | TASK [keystone : 
Running Keystone bootstrap container] ************************* 2026-03-17 01:02:04.170661 | orchestrator | Tuesday 17 March 2026 01:00:17 +0000 (0:00:00.307) 0:00:46.620 ********* 2026-03-17 01:02:04.170668 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:04.170675 | orchestrator | 2026-03-17 01:02:04.170685 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-17 01:02:04.170692 | orchestrator | Tuesday 17 March 2026 01:00:32 +0000 (0:00:15.503) 0:01:02.123 ********* 2026-03-17 01:02:04.170698 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:04.170704 | orchestrator | 2026-03-17 01:02:04.170711 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-17 01:02:04.170717 | orchestrator | Tuesday 17 March 2026 01:00:44 +0000 (0:00:12.285) 0:01:14.409 ********* 2026-03-17 01:02:04.170723 | orchestrator | 2026-03-17 01:02:04.170729 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-17 01:02:04.170735 | orchestrator | Tuesday 17 March 2026 01:00:45 +0000 (0:00:00.081) 0:01:14.491 ********* 2026-03-17 01:02:04.170741 | orchestrator | 2026-03-17 01:02:04.170762 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-17 01:02:04.170769 | orchestrator | Tuesday 17 March 2026 01:00:45 +0000 (0:00:00.068) 0:01:14.560 ********* 2026-03-17 01:02:04.170775 | orchestrator | 2026-03-17 01:02:04.170782 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-17 01:02:04.170788 | orchestrator | Tuesday 17 March 2026 01:00:45 +0000 (0:00:00.065) 0:01:14.625 ********* 2026-03-17 01:02:04.170794 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:04.170801 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:04.170807 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:04.170813 | 
orchestrator | 2026-03-17 01:02:04.170816 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-17 01:02:04.170820 | orchestrator | Tuesday 17 March 2026 01:00:55 +0000 (0:00:10.404) 0:01:25.030 ********* 2026-03-17 01:02:04.170824 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:04.170828 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:04.170832 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:04.170836 | orchestrator | 2026-03-17 01:02:04.170843 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-17 01:02:04.170847 | orchestrator | Tuesday 17 March 2026 01:01:00 +0000 (0:00:04.562) 0:01:29.592 ********* 2026-03-17 01:02:04.170851 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:04.170856 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:04.170862 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:04.170874 | orchestrator | 2026-03-17 01:02:04.170880 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:02:04.170886 | orchestrator | Tuesday 17 March 2026 01:01:05 +0000 (0:00:05.626) 0:01:35.219 ********* 2026-03-17 01:02:04.170893 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:04.170899 | orchestrator | 2026-03-17 01:02:04.170906 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-17 01:02:04.170910 | orchestrator | Tuesday 17 March 2026 01:01:06 +0000 (0:00:00.755) 0:01:35.974 ********* 2026-03-17 01:02:04.170914 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:04.170918 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:04.170922 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:04.170926 | orchestrator | 2026-03-17 01:02:04.170930 | orchestrator | TASK [keystone : Run key 
distribution] ***************************************** 2026-03-17 01:02:04.170934 | orchestrator | Tuesday 17 March 2026 01:01:07 +0000 (0:00:00.839) 0:01:36.814 ********* 2026-03-17 01:02:04.170938 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:04.170942 | orchestrator | 2026-03-17 01:02:04.170945 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-17 01:02:04.170950 | orchestrator | Tuesday 17 March 2026 01:01:09 +0000 (0:00:01.756) 0:01:38.571 ********* 2026-03-17 01:02:04.170953 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-17 01:02:04.170957 | orchestrator | 2026-03-17 01:02:04.170961 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-17 01:02:04.170965 | orchestrator | Tuesday 17 March 2026 01:01:20 +0000 (0:00:11.853) 0:01:50.425 ********* 2026-03-17 01:02:04.170970 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-17 01:02:04.170977 | orchestrator | 2026-03-17 01:02:04.170981 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-17 01:02:04.170985 | orchestrator | Tuesday 17 March 2026 01:01:49 +0000 (0:00:28.881) 0:02:19.306 ********* 2026-03-17 01:02:04.170989 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-17 01:02:04.170993 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-17 01:02:04.170997 | orchestrator | 2026-03-17 01:02:04.171002 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-17 01:02:04.171008 | orchestrator | Tuesday 17 March 2026 01:01:58 +0000 (0:00:08.379) 0:02:27.686 ********* 2026-03-17 01:02:04.171012 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.171016 | orchestrator | 2026-03-17 01:02:04.171021 | 
orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-17 01:02:04.171027 | orchestrator | Tuesday 17 March 2026 01:01:58 +0000 (0:00:00.127) 0:02:27.813 ********* 2026-03-17 01:02:04.171034 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.171040 | orchestrator | 2026-03-17 01:02:04.171046 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-17 01:02:04.171053 | orchestrator | Tuesday 17 March 2026 01:01:58 +0000 (0:00:00.112) 0:02:27.926 ********* 2026-03-17 01:02:04.171060 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.171066 | orchestrator | 2026-03-17 01:02:04.171073 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-17 01:02:04.171077 | orchestrator | Tuesday 17 March 2026 01:01:58 +0000 (0:00:00.123) 0:02:28.049 ********* 2026-03-17 01:02:04.171081 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.171085 | orchestrator | 2026-03-17 01:02:04.171092 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-17 01:02:04.171098 | orchestrator | Tuesday 17 March 2026 01:01:59 +0000 (0:00:00.502) 0:02:28.552 ********* 2026-03-17 01:02:04.171105 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:04.171111 | orchestrator | 2026-03-17 01:02:04.171118 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:02:04.171133 | orchestrator | Tuesday 17 March 2026 01:02:02 +0000 (0:00:03.271) 0:02:31.824 ********* 2026-03-17 01:02:04.171141 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:04.171147 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:04.171153 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:04.171160 | orchestrator | 2026-03-17 01:02:04.171166 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-17 01:02:04.171173 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-17 01:02:04.171181 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-17 01:02:04.171188 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-17 01:02:04.171194 | orchestrator | 2026-03-17 01:02:04.171199 | orchestrator | 2026-03-17 01:02:04.171203 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:02:04.171207 | orchestrator | Tuesday 17 March 2026 01:02:02 +0000 (0:00:00.420) 0:02:32.244 ********* 2026-03-17 01:02:04.171211 | orchestrator | =============================================================================== 2026-03-17 01:02:04.171215 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.88s 2026-03-17 01:02:04.171219 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.50s 2026-03-17 01:02:04.171228 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.29s 2026-03-17 01:02:04.171232 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.85s 2026-03-17 01:02:04.171236 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 10.40s 2026-03-17 01:02:04.171239 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.03s 2026-03-17 01:02:04.171243 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 8.38s 2026-03-17 01:02:04.171247 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.63s 2026-03-17 01:02:04.171251 | orchestrator | keystone : Copying over 
keystone.conf ----------------------------------- 5.05s 2026-03-17 01:02:04.171255 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.56s 2026-03-17 01:02:04.171258 | orchestrator | keystone : Creating default user role ----------------------------------- 3.27s 2026-03-17 01:02:04.171262 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.19s 2026-03-17 01:02:04.171266 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.15s 2026-03-17 01:02:04.171270 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.75s 2026-03-17 01:02:04.171274 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.41s 2026-03-17 01:02:04.171277 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.21s 2026-03-17 01:02:04.171281 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.21s 2026-03-17 01:02:04.171285 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.86s 2026-03-17 01:02:04.171289 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.76s 2026-03-17 01:02:04.171293 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.74s 2026-03-17 01:02:04.171296 | orchestrator | 2026-03-17 01:02:04 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:07.194299 | orchestrator | 2026-03-17 01:02:07 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:07.194381 | orchestrator | 2026-03-17 01:02:07 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:07.194399 | orchestrator | 2026-03-17 01:02:07 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:02:07.195032 | orchestrator | 2026-03-17 
01:02:07 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:07.195659 | orchestrator | 2026-03-17 01:02:07 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:07.195686 | orchestrator | 2026-03-17 01:02:07 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:10.223282 | orchestrator | 2026-03-17 01:02:10 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:10.225382 | orchestrator | 2026-03-17 01:02:10 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:10.227425 | orchestrator | 2026-03-17 01:02:10 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:02:10.228944 | orchestrator | 2026-03-17 01:02:10 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:10.230578 | orchestrator | 2026-03-17 01:02:10 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:10.230907 | orchestrator | 2026-03-17 01:02:10 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:13.275141 | orchestrator | 2026-03-17 01:02:13 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:13.275212 | orchestrator | 2026-03-17 01:02:13 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:13.276695 | orchestrator | 2026-03-17 01:02:13 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:02:13.276731 | orchestrator | 2026-03-17 01:02:13 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:13.277427 | orchestrator | 2026-03-17 01:02:13 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:13.277456 | orchestrator | 2026-03-17 01:02:13 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:16.326346 | orchestrator | 2026-03-17 01:02:16 | INFO  | Task 
cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:16.327639 | orchestrator | 2026-03-17 01:02:16 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:16.332688 | orchestrator | 2026-03-17 01:02:16 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:02:16.335727 | orchestrator | 2026-03-17 01:02:16 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:16.339522 | orchestrator | 2026-03-17 01:02:16 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:16.340087 | orchestrator | 2026-03-17 01:02:16 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:19.392408 | orchestrator | 2026-03-17 01:02:19 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:19.394187 | orchestrator | 2026-03-17 01:02:19 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:19.395779 | orchestrator | 2026-03-17 01:02:19 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:02:19.397418 | orchestrator | 2026-03-17 01:02:19 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:19.400854 | orchestrator | 2026-03-17 01:02:19 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:19.400920 | orchestrator | 2026-03-17 01:02:19 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:22.439999 | orchestrator | 2026-03-17 01:02:22 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:22.442670 | orchestrator | 2026-03-17 01:02:22 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:22.445139 | orchestrator | 2026-03-17 01:02:22 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state STARTED 2026-03-17 01:02:22.448873 | orchestrator | 2026-03-17 01:02:22 | INFO  | Task 
b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:22.449440 | orchestrator | 2026-03-17 01:02:22 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:22.449475 | orchestrator | 2026-03-17 01:02:22 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:25.501687 | orchestrator | 2026-03-17 01:02:25 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:02:25.503290 | orchestrator | 2026-03-17 01:02:25 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:25.505175 | orchestrator | 2026-03-17 01:02:25 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:25.506828 | orchestrator | 2026-03-17 01:02:25 | INFO  | Task b783ee0f-755f-4924-8fde-ac1d139a763c is in state SUCCESS 2026-03-17 01:02:25.510851 | orchestrator | 2026-03-17 01:02:25 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:25.511651 | orchestrator | 2026-03-17 01:02:25 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:25.511694 | orchestrator | 2026-03-17 01:02:25 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:28.557778 | orchestrator | 2026-03-17 01:02:28 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:02:28.557922 | orchestrator | 2026-03-17 01:02:28 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:28.558897 | orchestrator | 2026-03-17 01:02:28 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:28.560561 | orchestrator | 2026-03-17 01:02:28 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:28.562042 | orchestrator | 2026-03-17 01:02:28 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:28.563376 | orchestrator | 2026-03-17 01:02:28 | INFO  | Wait 1 
second(s) until the next check 2026-03-17 01:02:31.603301 | orchestrator | 2026-03-17 01:02:31 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:02:31.603351 | orchestrator | 2026-03-17 01:02:31 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:31.603611 | orchestrator | 2026-03-17 01:02:31 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:31.604498 | orchestrator | 2026-03-17 01:02:31 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:31.604973 | orchestrator | 2026-03-17 01:02:31 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:31.604997 | orchestrator | 2026-03-17 01:02:31 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:34.652067 | orchestrator | 2026-03-17 01:02:34 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:02:34.653355 | orchestrator | 2026-03-17 01:02:34 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:34.655057 | orchestrator | 2026-03-17 01:02:34 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:34.656238 | orchestrator | 2026-03-17 01:02:34 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:34.657631 | orchestrator | 2026-03-17 01:02:34 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:34.657667 | orchestrator | 2026-03-17 01:02:34 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:37.688502 | orchestrator | 2026-03-17 01:02:37 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:02:37.689367 | orchestrator | 2026-03-17 01:02:37 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:37.690976 | orchestrator | 2026-03-17 01:02:37 | INFO  | Task 
c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:37.691636 | orchestrator | 2026-03-17 01:02:37 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:37.692663 | orchestrator | 2026-03-17 01:02:37 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:37.692688 | orchestrator | 2026-03-17 01:02:37 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:40.732049 | orchestrator | 2026-03-17 01:02:40 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:02:40.734379 | orchestrator | 2026-03-17 01:02:40 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:40.736506 | orchestrator | 2026-03-17 01:02:40 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:40.738430 | orchestrator | 2026-03-17 01:02:40 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:40.740091 | orchestrator | 2026-03-17 01:02:40 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:40.740139 | orchestrator | 2026-03-17 01:02:40 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:43.772410 | orchestrator | 2026-03-17 01:02:43 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:02:43.774842 | orchestrator | 2026-03-17 01:02:43 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:43.777004 | orchestrator | 2026-03-17 01:02:43 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:43.778328 | orchestrator | 2026-03-17 01:02:43 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:43.779525 | orchestrator | 2026-03-17 01:02:43 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:43.779792 | orchestrator | 2026-03-17 01:02:43 | INFO  | Wait 1 
second(s) until the next check 2026-03-17 01:02:46.819402 | orchestrator | 2026-03-17 01:02:46 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:02:46.819464 | orchestrator | 2026-03-17 01:02:46 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:46.820877 | orchestrator | 2026-03-17 01:02:46 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:46.821531 | orchestrator | 2026-03-17 01:02:46 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:46.822874 | orchestrator | 2026-03-17 01:02:46 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:46.822910 | orchestrator | 2026-03-17 01:02:46 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:49.856816 | orchestrator | 2026-03-17 01:02:49 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:02:49.857599 | orchestrator | 2026-03-17 01:02:49 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:49.862536 | orchestrator | 2026-03-17 01:02:49 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:49.863354 | orchestrator | 2026-03-17 01:02:49 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:49.864024 | orchestrator | 2026-03-17 01:02:49 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:49.864059 | orchestrator | 2026-03-17 01:02:49 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:52.888971 | orchestrator | 2026-03-17 01:02:52 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:02:52.889208 | orchestrator | 2026-03-17 01:02:52 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:52.889791 | orchestrator | 2026-03-17 01:02:52 | INFO  | Task 
c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:52.890450 | orchestrator | 2026-03-17 01:02:52 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:52.890852 | orchestrator | 2026-03-17 01:02:52 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:52.890951 | orchestrator | 2026-03-17 01:02:52 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:55.922786 | orchestrator | 2026-03-17 01:02:55 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:02:55.922879 | orchestrator | 2026-03-17 01:02:55 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:55.922889 | orchestrator | 2026-03-17 01:02:55 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:55.923875 | orchestrator | 2026-03-17 01:02:55 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:55.924559 | orchestrator | 2026-03-17 01:02:55 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:55.924598 | orchestrator | 2026-03-17 01:02:55 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:58.947665 | orchestrator | 2026-03-17 01:02:58 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:02:58.947825 | orchestrator | 2026-03-17 01:02:58 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:02:58.948490 | orchestrator | 2026-03-17 01:02:58 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:02:58.949078 | orchestrator | 2026-03-17 01:02:58 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:02:58.949717 | orchestrator | 2026-03-17 01:02:58 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:02:58.949740 | orchestrator | 2026-03-17 01:02:58 | INFO  | Wait 1 
second(s) until the next check 2026-03-17 01:03:01.970602 | orchestrator | 2026-03-17 01:03:01 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:03:01.970843 | orchestrator | 2026-03-17 01:03:01 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:03:01.971334 | orchestrator | 2026-03-17 01:03:01 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:03:01.971924 | orchestrator | 2026-03-17 01:03:01 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:03:01.972500 | orchestrator | 2026-03-17 01:03:01 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:03:01.972533 | orchestrator | 2026-03-17 01:03:01 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:03:04.995818 | orchestrator | 2026-03-17 01:03:04 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:03:04.995957 | orchestrator | 2026-03-17 01:03:04 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:03:04.996485 | orchestrator | 2026-03-17 01:03:04 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:03:04.996929 | orchestrator | 2026-03-17 01:03:04 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:03:04.997599 | orchestrator | 2026-03-17 01:03:04 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:03:04.997640 | orchestrator | 2026-03-17 01:03:04 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:03:08.019342 | orchestrator | 2026-03-17 01:03:08 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:03:08.019579 | orchestrator | 2026-03-17 01:03:08 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:03:08.020727 | orchestrator | 2026-03-17 01:03:08 | INFO  | Task 
c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:03:08.021285 | orchestrator | 2026-03-17 01:03:08 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:03:08.021766 | orchestrator | 2026-03-17 01:03:08 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:03:08.021982 | orchestrator | 2026-03-17 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:03:11.073187 | orchestrator | 2026-03-17 01:03:11 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:03:11.073498 | orchestrator | 2026-03-17 01:03:11 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:03:11.074077 | orchestrator | 2026-03-17 01:03:11 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:03:11.074677 | orchestrator | 2026-03-17 01:03:11 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:03:11.075889 | orchestrator | 2026-03-17 01:03:11 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:03:11.075922 | orchestrator | 2026-03-17 01:03:11 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:03:14.092142 | orchestrator | 2026-03-17 01:03:14 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:03:14.093356 | orchestrator | 2026-03-17 01:03:14 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:03:14.094845 | orchestrator | 2026-03-17 01:03:14 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:03:14.095476 | orchestrator | 2026-03-17 01:03:14 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:03:14.096140 | orchestrator | 2026-03-17 01:03:14 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:03:14.096163 | orchestrator | 2026-03-17 01:03:14 | INFO  | Wait 1 
second(s) until the next check 2026-03-17 01:03:17.115566 | orchestrator | 2026-03-17 01:03:17 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:03:17.115988 | orchestrator | 2026-03-17 01:03:17 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:03:17.116761 | orchestrator | 2026-03-17 01:03:17 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:03:17.117199 | orchestrator | 2026-03-17 01:03:17 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:03:17.117952 | orchestrator | 2026-03-17 01:03:17 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:03:17.117999 | orchestrator | 2026-03-17 01:03:17 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:03:20.137977 | orchestrator | 2026-03-17 01:03:20 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:03:20.138133 | orchestrator | 2026-03-17 01:03:20 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:03:20.138733 | orchestrator | 2026-03-17 01:03:20 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:03:20.139098 | orchestrator | 2026-03-17 01:03:20 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED 2026-03-17 01:03:20.139739 | orchestrator | 2026-03-17 01:03:20 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:03:20.139763 | orchestrator | 2026-03-17 01:03:20 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:03:23.161763 | orchestrator | 2026-03-17 01:03:23 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED 2026-03-17 01:03:23.162351 | orchestrator | 2026-03-17 01:03:23 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state STARTED 2026-03-17 01:03:23.163834 | orchestrator | 2026-03-17 01:03:23 | INFO  | Task 
c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:03:23.164529 | orchestrator | 2026-03-17 01:03:23 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:03:23.165345 | orchestrator | 2026-03-17 01:03:23 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:03:23.165373 | orchestrator | 2026-03-17 01:03:23 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:26.194949 | orchestrator | 2026-03-17 01:03:26 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED
2026-03-17 01:03:26.195020 | orchestrator | 2026-03-17 01:03:26 | INFO  | Task cc8d73d5-070e-4b63-a982-ac7b07b1506c is in state SUCCESS
2026-03-17 01:03:26.195540 | orchestrator | 2026-03-17 01:03:26 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:03:26.196045 | orchestrator | 2026-03-17 01:03:26 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:03:26.197020 | orchestrator | 2026-03-17 01:03:26 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:03:26.197074 | orchestrator | 2026-03-17 01:03:26 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:29.228479 | orchestrator | 2026-03-17 01:03:29 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED
2026-03-17 01:03:29.228568 | orchestrator | 2026-03-17 01:03:29 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:03:29.229038 | orchestrator | 2026-03-17 01:03:29 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:03:29.229917 | orchestrator | 2026-03-17 01:03:29 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:03:29.230591 | orchestrator | 2026-03-17 01:03:29 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:03:29.230640 | orchestrator | 2026-03-17 01:03:29 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:32.258255 | orchestrator | 2026-03-17 01:03:32 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED
2026-03-17 01:03:32.258393 | orchestrator | 2026-03-17 01:03:32 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:03:32.260478 | orchestrator | 2026-03-17 01:03:32 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:03:32.260543 | orchestrator | 2026-03-17 01:03:32 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:03:32.260548 | orchestrator | 2026-03-17 01:03:32 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:03:32.260553 | orchestrator | 2026-03-17 01:03:32 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:35.282506 | orchestrator | 2026-03-17 01:03:35 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED
2026-03-17 01:03:35.283012 | orchestrator | 2026-03-17 01:03:35 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:03:35.283712 | orchestrator | 2026-03-17 01:03:35 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:03:35.284324 | orchestrator | 2026-03-17 01:03:35 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:03:35.284916 | orchestrator | 2026-03-17 01:03:35 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:03:35.285003 | orchestrator | 2026-03-17 01:03:35 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:38.313418 | orchestrator | 2026-03-17 01:03:38 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED
2026-03-17 01:03:38.313527 | orchestrator | 2026-03-17 01:03:38 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:03:38.313537 | orchestrator | 2026-03-17 01:03:38 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:03:38.314546 | orchestrator | 2026-03-17 01:03:38 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:03:38.315247 | orchestrator | 2026-03-17 01:03:38 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:03:38.315295 | orchestrator | 2026-03-17 01:03:38 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:41.338413 | orchestrator | 2026-03-17 01:03:41 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED
2026-03-17 01:03:41.338500 | orchestrator | 2026-03-17 01:03:41 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:03:41.338516 | orchestrator | 2026-03-17 01:03:41 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:03:41.338525 | orchestrator | 2026-03-17 01:03:41 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:03:41.338534 | orchestrator | 2026-03-17 01:03:41 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:03:41.338542 | orchestrator | 2026-03-17 01:03:41 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:44.368166 | orchestrator | 2026-03-17 01:03:44 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED
2026-03-17 01:03:44.368260 | orchestrator | 2026-03-17 01:03:44 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:03:44.369019 | orchestrator | 2026-03-17 01:03:44 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:03:44.369349 | orchestrator | 2026-03-17 01:03:44 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:03:44.369855 | orchestrator | 2026-03-17 01:03:44 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:03:44.369872 | orchestrator | 2026-03-17 01:03:44 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:47.396064 | orchestrator | 2026-03-17 01:03:47 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED
2026-03-17 01:03:47.396409 | orchestrator | 2026-03-17 01:03:47 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:03:47.397318 | orchestrator | 2026-03-17 01:03:47 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:03:47.398284 | orchestrator | 2026-03-17 01:03:47 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:03:47.398912 | orchestrator | 2026-03-17 01:03:47 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:03:47.398952 | orchestrator | 2026-03-17 01:03:47 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:50.438292 | orchestrator | 2026-03-17 01:03:50 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED
2026-03-17 01:03:50.440446 | orchestrator | 2026-03-17 01:03:50 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:03:50.441945 | orchestrator | 2026-03-17 01:03:50 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:03:50.443492 | orchestrator | 2026-03-17 01:03:50 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:03:50.444721 | orchestrator | 2026-03-17 01:03:50 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:03:50.444785 | orchestrator | 2026-03-17 01:03:50 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:53.474830 | orchestrator | 2026-03-17 01:03:53 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED
2026-03-17 01:03:53.477040 | orchestrator | 2026-03-17 01:03:53 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:03:53.479002 | orchestrator | 2026-03-17 01:03:53 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:03:53.482371 | orchestrator | 2026-03-17 01:03:53 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:03:53.485658 | orchestrator | 2026-03-17 01:03:53 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:03:53.485711 | orchestrator | 2026-03-17 01:03:53 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:56.519048 | orchestrator | 2026-03-17 01:03:56 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED
2026-03-17 01:03:56.519181 | orchestrator | 2026-03-17 01:03:56 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:03:56.520031 | orchestrator | 2026-03-17 01:03:56 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:03:56.521083 | orchestrator | 2026-03-17 01:03:56 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:03:56.522083 | orchestrator | 2026-03-17 01:03:56 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:03:56.522115 | orchestrator | 2026-03-17 01:03:56 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:59.553540 | orchestrator | 2026-03-17 01:03:59 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state STARTED
2026-03-17 01:03:59.553833 | orchestrator | 2026-03-17 01:03:59 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:03:59.554418 | orchestrator | 2026-03-17 01:03:59 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:03:59.555395 | orchestrator | 2026-03-17 01:03:59 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:03:59.556314 | orchestrator | 2026-03-17 01:03:59 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:03:59.556353 | orchestrator | 2026-03-17 01:03:59 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:02.578533 | orchestrator | 2026-03-17 01:04:02 | INFO  | Task e86728f5-9420-4d96-8876-21f9dbdd00b6 is in state SUCCESS
2026-03-17 01:04:02.578845 | orchestrator |
2026-03-17 01:04:02.578866 | orchestrator |
2026-03-17 01:04:02.578872 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-17 01:04:02.578878 | orchestrator |
2026-03-17 01:04:02.578883 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-17 01:04:02.578888 | orchestrator | Tuesday 17 March 2026 01:01:29 +0000 (0:00:00.224) 0:00:00.224 *********
2026-03-17 01:04:02.578894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-17 01:04:02.578900 | orchestrator |
2026-03-17 01:04:02.578905 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-17 01:04:02.578910 | orchestrator | Tuesday 17 March 2026 01:01:30 +0000 (0:00:00.232) 0:00:00.456 *********
2026-03-17 01:04:02.578915 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-17 01:04:02.578920 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-17 01:04:02.578925 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-17 01:04:02.578930 | orchestrator |
2026-03-17 01:04:02.578935 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-17 01:04:02.578940 | orchestrator | Tuesday 17 March 2026 01:01:31 +0000 (0:00:01.273) 0:00:01.729 *********
2026-03-17 01:04:02.578944 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-17 01:04:02.578949 | orchestrator |
2026-03-17 01:04:02.578954 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-17 01:04:02.578959 | orchestrator | Tuesday 17 March 2026 01:01:32 +0000 (0:00:01.370) 0:00:03.100 *********
2026-03-17 01:04:02.578964 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.578969 | orchestrator |
2026-03-17 01:04:02.578974 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-17 01:04:02.578979 | orchestrator | Tuesday 17 March 2026 01:01:33 +0000 (0:00:00.868) 0:00:03.968 *********
2026-03-17 01:04:02.578983 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.578988 | orchestrator |
2026-03-17 01:04:02.578993 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-17 01:04:02.578998 | orchestrator | Tuesday 17 March 2026 01:01:34 +0000 (0:00:00.892) 0:00:04.861 *********
2026-03-17 01:04:02.579003 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-17 01:04:02.579008 | orchestrator | ok: [testbed-manager]
2026-03-17 01:04:02.579013 | orchestrator |
2026-03-17 01:04:02.579018 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-17 01:04:02.579022 | orchestrator | Tuesday 17 March 2026 01:02:14 +0000 (0:00:40.232) 0:00:45.094 *********
2026-03-17 01:04:02.579027 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-17 01:04:02.579032 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-17 01:04:02.579037 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-17 01:04:02.579042 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-17 01:04:02.579047 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-17 01:04:02.579052 | orchestrator |
2026-03-17 01:04:02.579057 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-17 01:04:02.579062 | orchestrator | Tuesday 17 March 2026 01:02:18 +0000 (0:00:04.039) 0:00:49.133 *********
2026-03-17 01:04:02.579066 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-17 01:04:02.579071 | orchestrator |
2026-03-17 01:04:02.579076 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-17 01:04:02.579081 | orchestrator | Tuesday 17 March 2026 01:02:19 +0000 (0:00:00.463) 0:00:49.597 *********
2026-03-17 01:04:02.579086 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:04:02.579102 | orchestrator |
2026-03-17 01:04:02.579111 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-17 01:04:02.579121 | orchestrator | Tuesday 17 March 2026 01:02:19 +0000 (0:00:00.136) 0:00:49.734 *********
2026-03-17 01:04:02.579134 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:04:02.579142 | orchestrator |
2026-03-17 01:04:02.579150 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-17 01:04:02.579160 | orchestrator | Tuesday 17 March 2026 01:02:19 +0000 (0:00:00.469) 0:00:50.204 *********
2026-03-17 01:04:02.579169 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.579177 | orchestrator |
2026-03-17 01:04:02.579186 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-17 01:04:02.579195 | orchestrator | Tuesday 17 March 2026 01:02:21 +0000 (0:00:01.355) 0:00:51.559 *********
2026-03-17 01:04:02.579203 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.579209 | orchestrator |
2026-03-17 01:04:02.579214 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-17 01:04:02.579218 | orchestrator | Tuesday 17 March 2026 01:02:21 +0000 (0:00:00.777) 0:00:52.337 *********
2026-03-17 01:04:02.579231 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.579236 | orchestrator |
2026-03-17 01:04:02.579241 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-17 01:04:02.579246 | orchestrator | Tuesday 17 March 2026 01:02:22 +0000 (0:00:00.512) 0:00:52.849 *********
2026-03-17 01:04:02.579250 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-17 01:04:02.579255 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-17 01:04:02.579260 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-17 01:04:02.579265 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-17 01:04:02.579270 | orchestrator |
2026-03-17 01:04:02.579274 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:04:02.579279 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 01:04:02.579285 | orchestrator |
2026-03-17 01:04:02.579289 | orchestrator |
2026-03-17 01:04:02.579301 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:04:02.579306 | orchestrator | Tuesday 17 March 2026 01:02:23 +0000 (0:00:01.411) 0:00:54.261 *********
2026-03-17 01:04:02.579311 | orchestrator | ===============================================================================
2026-03-17 01:04:02.579316 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.23s
2026-03-17 01:04:02.579321 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.04s
2026-03-17 01:04:02.579325 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.41s
2026-03-17 01:04:02.579330 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.37s
2026-03-17 01:04:02.579335 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.36s
2026-03-17 01:04:02.579340 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.27s
2026-03-17 01:04:02.579344 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.89s
2026-03-17 01:04:02.579349 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.87s
2026-03-17 01:04:02.579354 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s
2026-03-17 01:04:02.579358 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.51s
2026-03-17 01:04:02.579363 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.47s
2026-03-17 01:04:02.579368 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s
2026-03-17 01:04:02.579373 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2026-03-17 01:04:02.579378 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2026-03-17 01:04:02.579387 | orchestrator |
2026-03-17 01:04:02.579392 | orchestrator |
2026-03-17 01:04:02.579397 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-17 01:04:02.579401 | orchestrator |
2026-03-17 01:04:02.579406 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-17 01:04:02.579411 | orchestrator | Tuesday 17 March 2026 01:02:07 +0000 (0:00:00.081) 0:00:00.081 *********
2026-03-17 01:04:02.579416 | orchestrator | changed: [localhost]
2026-03-17 01:04:02.579421 | orchestrator |
2026-03-17 01:04:02.579425 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-17 01:04:02.579430 | orchestrator | Tuesday 17 March 2026 01:02:07 +0000 (0:00:00.806) 0:00:00.887 *********
2026-03-17 01:04:02.579435 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2026-03-17 01:04:02.579440 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2026-03-17 01:04:02.579444 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left).
2026-03-17 01:04:02.579451 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.initramfs", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2024.2.initramfs"}
2026-03-17 01:04:02.579457 | orchestrator |
2026-03-17 01:04:02.579461 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:04:02.579466 | orchestrator | localhost : ok=1  changed=1  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-03-17 01:04:02.579471 | orchestrator |
2026-03-17 01:04:02.579477 | orchestrator |
2026-03-17 01:04:02.579483 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:04:02.579489 | orchestrator | Tuesday 17 March 2026 01:03:25 +0000 (0:01:18.052) 0:01:18.940 *********
2026-03-17 01:04:02.579494 | orchestrator | ===============================================================================
2026-03-17 01:04:02.579500 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 78.05s
2026-03-17 01:04:02.579505 | orchestrator | Ensure the destination directory exists --------------------------------- 0.81s
2026-03-17 01:04:02.579511 | orchestrator |
2026-03-17 01:04:02.579517 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-17 01:04:02.579523 | orchestrator | 2.16.14
2026-03-17 01:04:02.579528 | orchestrator |
2026-03-17 01:04:02.579534 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-03-17 01:04:02.579539 | orchestrator |
2026-03-17 01:04:02.579545 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-03-17 01:04:02.579550 | orchestrator | Tuesday 17 March 2026 01:02:28 +0000 (0:00:00.248) 0:00:00.248 *********
2026-03-17 01:04:02.579556 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.579561 | orchestrator |
2026-03-17 01:04:02.579570 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-03-17 01:04:02.579576 | orchestrator | Tuesday 17 March 2026 01:02:29 +0000 (0:00:01.587) 0:00:01.835 *********
2026-03-17 01:04:02.579581 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.579587 | orchestrator |
2026-03-17 01:04:02.579593 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-03-17 01:04:02.579598 | orchestrator | Tuesday 17 March 2026 01:02:30 +0000 (0:00:00.918) 0:00:02.754 *********
2026-03-17 01:04:02.579604 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.579609 | orchestrator |
2026-03-17 01:04:02.579615 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-03-17 01:04:02.579620 | orchestrator | Tuesday 17 March 2026 01:02:31 +0000 (0:00:00.897) 0:00:03.651 *********
2026-03-17 01:04:02.579626 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.579683 | orchestrator |
2026-03-17 01:04:02.579690 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-03-17 01:04:02.579704 | orchestrator | Tuesday 17 March 2026 01:02:32 +0000 (0:00:01.045) 0:00:04.697 *********
2026-03-17 01:04:02.579710 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.579715 | orchestrator |
2026-03-17 01:04:02.579721 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-03-17 01:04:02.579727 | orchestrator | Tuesday 17 March 2026 01:02:33 +0000 (0:00:00.915) 0:00:05.612 *********
2026-03-17 01:04:02.579733 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.579739 | orchestrator |
2026-03-17 01:04:02.579745 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-03-17 01:04:02.579751 | orchestrator | Tuesday 17 March 2026 01:02:34 +0000 (0:00:00.952) 0:00:06.565 *********
2026-03-17 01:04:02.579757 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.579762 | orchestrator |
2026-03-17 01:04:02.579768 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-03-17 01:04:02.579774 | orchestrator | Tuesday 17 March 2026 01:02:35 +0000 (0:00:01.119) 0:00:07.685 *********
2026-03-17 01:04:02.579779 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.579784 | orchestrator |
2026-03-17 01:04:02.579789 | orchestrator | TASK [Create admin user] *******************************************************
2026-03-17 01:04:02.579794 | orchestrator | Tuesday 17 March 2026 01:02:36 +0000 (0:00:01.036) 0:00:08.722 *********
2026-03-17 01:04:02.579828 | orchestrator | changed: [testbed-manager]
2026-03-17 01:04:02.579834 | orchestrator |
2026-03-17 01:04:02.579839 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-03-17 01:04:02.579844 | orchestrator | Tuesday 17 March 2026 01:03:37 +0000 (0:01:00.296) 0:01:09.018 *********
2026-03-17 01:04:02.579849 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:04:02.579854 | orchestrator |
2026-03-17 01:04:02.579932 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-17 01:04:02.579938 | orchestrator |
2026-03-17 01:04:02.579943 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-17 01:04:02.579948 | orchestrator | Tuesday 17 March 2026 01:03:37 +0000 (0:00:00.137) 0:01:09.156 *********
2026-03-17 01:04:02.579953 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:02.579957 | orchestrator |
2026-03-17 01:04:02.579962 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-17 01:04:02.579967 | orchestrator |
2026-03-17 01:04:02.579972 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-17 01:04:02.579977 | orchestrator | Tuesday 17 March 2026 01:03:48 +0000 (0:00:11.626) 0:01:20.782 *********
2026-03-17 01:04:02.579982 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:04:02.579987 | orchestrator |
2026-03-17 01:04:02.579991 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-17 01:04:02.579996 | orchestrator |
2026-03-17 01:04:02.580001 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-17 01:04:02.580006 | orchestrator | Tuesday 17 March 2026 01:03:49 +0000 (0:00:01.035) 0:01:21.817 *********
2026-03-17 01:04:02.580011 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:04:02.580016 | orchestrator |
2026-03-17 01:04:02.580021 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:04:02.580026 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 01:04:02.580031 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:04:02.580036 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:04:02.580041 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:04:02.580051 | orchestrator |
2026-03-17 01:04:02.580056 | orchestrator |
2026-03-17 01:04:02.580060 | orchestrator |
2026-03-17 01:04:02.580065 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:04:02.580070 | orchestrator | Tuesday 17 March 2026 01:04:00 +0000 (0:00:11.065) 0:01:32.883 *********
2026-03-17 01:04:02.580075 | orchestrator | ===============================================================================
2026-03-17 01:04:02.580080 | orchestrator | Create admin user ------------------------------------------------------ 60.30s
2026-03-17 01:04:02.580085 | orchestrator | Restart ceph manager service ------------------------------------------- 23.73s
2026-03-17 01:04:02.580089 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.59s
2026-03-17 01:04:02.580094 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.12s
2026-03-17 01:04:02.580099 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.05s
2026-03-17 01:04:02.580105 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.04s
2026-03-17 01:04:02.580117 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.95s
2026-03-17 01:04:02.580126 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.92s
2026-03-17 01:04:02.580133 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.92s
2026-03-17 01:04:02.580141 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.90s
2026-03-17 01:04:02.580149 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s
2026-03-17 01:04:02.580238 | orchestrator | 2026-03-17 01:04:02 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:02.580247 | orchestrator | 2026-03-17 01:04:02 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:04:02.580257 | orchestrator | 2026-03-17 01:04:02 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:04:02.581143 | orchestrator | 2026-03-17 01:04:02 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:02.581239 | orchestrator | 2026-03-17 01:04:02 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:05.603462 | orchestrator | 2026-03-17 01:04:05 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:05.603852 | orchestrator | 2026-03-17 01:04:05 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:04:05.604410 | orchestrator | 2026-03-17 01:04:05 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:04:05.605014 | orchestrator | 2026-03-17 01:04:05 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:05.605088 | orchestrator | 2026-03-17 01:04:05 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:08.627772 | orchestrator | 2026-03-17 01:04:08 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:08.630474 | orchestrator | 2026-03-17 01:04:08 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:04:08.630591 | orchestrator | 2026-03-17 01:04:08 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:04:08.630606 | orchestrator | 2026-03-17 01:04:08 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:08.630758 | orchestrator | 2026-03-17 01:04:08 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:11.653942 | orchestrator | 2026-03-17 01:04:11 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:11.654389 | orchestrator | 2026-03-17 01:04:11 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state STARTED
2026-03-17 01:04:11.654894 | orchestrator | 2026-03-17 01:04:11 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:04:11.655775 | orchestrator | 2026-03-17 01:04:11 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:11.655799 | orchestrator | 2026-03-17 01:04:11 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:14.674677 | orchestrator | 2026-03-17 01:04:14 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:14.675580 | orchestrator | 2026-03-17 01:04:14 | INFO  | Task b12a1eed-3114-4602-8845-89baaaa3f206 is in state SUCCESS
2026-03-17 01:04:14.675606 | orchestrator |
2026-03-17 01:04:14.676815 | orchestrator |
2026-03-17 01:04:14.676852 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:04:14.676862 | orchestrator |
2026-03-17 01:04:14.676868 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:04:14.676875 | orchestrator | Tuesday 17 March 2026 01:02:07 +0000 (0:00:00.226) 0:00:00.226 *********
2026-03-17 01:04:14.676881 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:04:14.676888 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:04:14.676895 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:04:14.676901 | orchestrator |
2026-03-17 01:04:14.676906 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:04:14.676913 | orchestrator | Tuesday 17 March 2026 01:02:07 +0000 (0:00:00.274) 0:00:00.501 *********
2026-03-17 01:04:14.676920 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-03-17 01:04:14.676926 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-17 01:04:14.676933 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-03-17 01:04:14.676939 | orchestrator |
2026-03-17 01:04:14.676945 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-03-17 01:04:14.676952 | orchestrator |
2026-03-17 01:04:14.676958 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-17 01:04:14.676965 | orchestrator | Tuesday 17 March 2026 01:02:08 +0000 (0:00:00.397) 0:00:00.898 *********
2026-03-17 01:04:14.676972 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:04:14.677279 | orchestrator |
2026-03-17 01:04:14.677295 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-03-17 01:04:14.677311 | orchestrator | Tuesday 17 March 2026 01:02:08 +0000 (0:00:00.538) 0:00:01.437 *********
2026-03-17 01:04:14.677318 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-03-17 01:04:14.677324 | orchestrator |
2026-03-17 01:04:14.677331 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-03-17 01:04:14.677337 | orchestrator | Tuesday 17 March 2026 01:02:12 +0000 (0:00:04.106) 0:00:05.543 *********
2026-03-17 01:04:14.677344 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-03-17 01:04:14.677350 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-03-17 01:04:14.677357 | orchestrator |
2026-03-17 01:04:14.677362 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-03-17 01:04:14.677368 | orchestrator | Tuesday 17 March 2026 01:02:20 +0000 (0:00:07.634) 0:00:13.177 *********
2026-03-17 01:04:14.677374 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-17 01:04:14.677380 | orchestrator |
2026-03-17 01:04:14.677394 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-03-17 01:04:14.677401 | orchestrator | Tuesday 17 March 2026 01:02:24 +0000 (0:00:03.703) 0:00:16.880 *********
2026-03-17 01:04:14.677407 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:04:14.677414 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-03-17 01:04:14.677420 | orchestrator |
2026-03-17 01:04:14.677425 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-17 01:04:14.677445 | orchestrator | Tuesday 17 March 2026 01:02:28 +0000 (0:00:04.447) 0:00:21.328 ********* 2026-03-17 01:04:14.677451 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-17 01:04:14.677456 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-17 01:04:14.677462 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-17 01:04:14.677468 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-17 01:04:14.677474 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-17 01:04:14.677480 | orchestrator | 2026-03-17 01:04:14.677486 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-17 01:04:14.677492 | orchestrator | Tuesday 17 March 2026 01:02:45 +0000 (0:00:17.427) 0:00:38.755 ********* 2026-03-17 01:04:14.677498 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-17 01:04:14.677504 | orchestrator | 2026-03-17 01:04:14.677511 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-17 01:04:14.677517 | orchestrator | Tuesday 17 March 2026 01:02:50 +0000 (0:00:04.225) 0:00:42.981 ********* 2026-03-17 01:04:14.677526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.677545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.677556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': 
'30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.677569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.677576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.677583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.677596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.677604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.677614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.677655 | orchestrator | 2026-03-17 01:04:14.677663 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-17 01:04:14.677674 | orchestrator | Tuesday 17 March 2026 01:02:52 +0000 (0:00:02.193) 0:00:45.175 ********* 2026-03-17 01:04:14.677680 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-17 01:04:14.677686 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-17 01:04:14.677693 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-17 01:04:14.677699 | orchestrator | 2026-03-17 01:04:14.677768 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-17 01:04:14.677776 | orchestrator | Tuesday 17 March 2026 01:02:53 +0000 (0:00:01.228) 0:00:46.403 ********* 2026-03-17 01:04:14.677783 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:14.677790 | orchestrator | 2026-03-17 01:04:14.677797 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-17 01:04:14.677803 | orchestrator | Tuesday 17 March 2026 01:02:53 +0000 (0:00:00.110) 0:00:46.514 ********* 2026-03-17 01:04:14.677810 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:14.677816 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:14.677823 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:14.677829 | orchestrator | 2026-03-17 01:04:14.677835 | orchestrator | TASK [barbican : include_tasks] 
************************************************ 2026-03-17 01:04:14.677842 | orchestrator | Tuesday 17 March 2026 01:02:54 +0000 (0:00:00.406) 0:00:46.921 ********* 2026-03-17 01:04:14.677848 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:04:14.677854 | orchestrator | 2026-03-17 01:04:14.677892 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-17 01:04:14.677899 | orchestrator | Tuesday 17 March 2026 01:02:54 +0000 (0:00:00.462) 0:00:47.384 ********* 2026-03-17 01:04:14.677906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.677920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.677931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.677944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.677951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.677958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.677984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.677992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678041 | orchestrator | 2026-03-17 01:04:14.678051 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-17 01:04:14.678073 | orchestrator | Tuesday 17 March 2026 01:02:58 +0000 (0:00:03.575) 0:00:50.959 ********* 2026-03-17 01:04:14.678082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:14.678089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678103 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:14.678117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:14.678129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2026-03-17 01:04:14.678140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678147 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:14.678154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:14.678161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678174 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:14.678180 | orchestrator | 2026-03-17 01:04:14.678189 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-17 01:04:14.678196 | orchestrator | Tuesday 17 March 2026 01:02:58 +0000 (0:00:00.853) 0:00:51.813 ********* 2026-03-17 01:04:14.678208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:14.678221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678236 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:14.678242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:14.678253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678284 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:14.678294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:14.678301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678307 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678314 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:14.678320 | orchestrator | 2026-03-17 01:04:14.678326 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-17 01:04:14.678332 | orchestrator | Tuesday 17 March 2026 01:03:00 +0000 (0:00:01.356) 0:00:53.170 ********* 2026-03-17 01:04:14.678339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.678353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.678362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.678369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678396 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678419 | orchestrator | 2026-03-17 01:04:14.678425 | orchestrator | TASK [barbican : Copying over barbican-api.ini] 
******************************** 2026-03-17 01:04:14.678431 | orchestrator | Tuesday 17 March 2026 01:03:04 +0000 (0:00:03.920) 0:00:57.090 ********* 2026-03-17 01:04:14.678437 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:04:14.678443 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:04:14.678450 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:04:14.678456 | orchestrator | 2026-03-17 01:04:14.678462 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-17 01:04:14.678468 | orchestrator | Tuesday 17 March 2026 01:03:06 +0000 (0:00:02.619) 0:00:59.709 ********* 2026-03-17 01:04:14.678474 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:04:14.678481 | orchestrator | 2026-03-17 01:04:14.678487 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-17 01:04:14.678493 | orchestrator | Tuesday 17 March 2026 01:03:07 +0000 (0:00:00.754) 0:01:00.464 ********* 2026-03-17 01:04:14.678499 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:14.678505 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:14.678511 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:14.678517 | orchestrator | 2026-03-17 01:04:14.678524 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-17 01:04:14.678530 | orchestrator | Tuesday 17 March 2026 01:03:08 +0000 (0:00:01.022) 0:01:01.487 ********* 2026-03-17 01:04:14.678537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.678554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.678562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.678571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678640 | orchestrator | 2026-03-17 01:04:14.678647 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-17 01:04:14.678653 | orchestrator | Tuesday 17 March 2026 01:03:17 +0000 (0:00:08.593) 0:01:10.081 ********* 2026-03-17 01:04:14.678664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:14.678671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678690 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:14.678702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:14.678709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678725 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:14.678733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-17 01:04:14.678740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:14.678758 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:14.678764 | orchestrator | 
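Each service item above carries a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`). As an aside, the shape of those dicts maps directly onto Docker's container health-check options; the sketch below is an illustrative approximation of that mapping, not kolla-ansible's actual rendering code:

```python
# Illustrative sketch: turning a kolla-style healthcheck dict (as seen in the
# log items above) into `docker run` health flags. Field names match the log;
# the rendering itself is a hypothetical approximation, not kolla-ansible's.

def healthcheck_flags(hc: dict) -> list[str]:
    """Translate {'interval': '30', 'retries': '3', ...} into docker CLI flags."""
    test = hc["test"]
    # ['CMD-SHELL', 'healthcheck_curl http://...'] -> the shell command string
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Example using the barbican-api healthcheck from the log:
flags = healthcheck_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
    "timeout": "30",
})
```

Note the two probe styles in the log: `healthcheck_curl` for the HTTP-facing `barbican_api` container and `healthcheck_port` (against the RabbitMQ port 5672) for the worker and keystone-listener containers, which have no HTTP endpoint of their own.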
2026-03-17 01:04:14.678770 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-17 01:04:14.678777 | orchestrator | Tuesday 17 March 2026 01:03:18 +0000 (0:00:01.379) 0:01:11.460 ********* 2026-03-17 01:04:14.678788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.678799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.678806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:14.678821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678828 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678857 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:14.678875 | orchestrator | 2026-03-17 01:04:14.678882 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-17 01:04:14.678889 | orchestrator | Tuesday 17 March 2026 01:03:22 +0000 (0:00:03.965) 0:01:15.425 ********* 2026-03-17 01:04:14.678896 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:14.678903 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:14.678909 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:14.678915 | orchestrator | 2026-03-17 01:04:14.678922 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-17 01:04:14.678928 | orchestrator | Tuesday 17 March 2026 01:03:23 +0000 (0:00:00.486) 0:01:15.911 ********* 
2026-03-17 01:04:14.678935 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:14.678955 | orchestrator |
2026-03-17 01:04:14.678961 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-03-17 01:04:14.678967 | orchestrator | Tuesday 17 March 2026 01:03:25 +0000 (0:00:02.476) 0:01:18.388 *********
2026-03-17 01:04:14.678974 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:14.678981 | orchestrator |
2026-03-17 01:04:14.678988 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-03-17 01:04:14.678995 | orchestrator | Tuesday 17 March 2026 01:03:28 +0000 (0:00:02.525) 0:01:20.913 *********
2026-03-17 01:04:14.679001 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:14.679008 | orchestrator |
2026-03-17 01:04:14.679016 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-17 01:04:14.679022 | orchestrator | Tuesday 17 March 2026 01:03:41 +0000 (0:00:13.037) 0:01:33.951 *********
2026-03-17 01:04:14.679029 | orchestrator |
2026-03-17 01:04:14.679036 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-17 01:04:14.679043 | orchestrator | Tuesday 17 March 2026 01:03:41 +0000 (0:00:00.121) 0:01:34.072 *********
2026-03-17 01:04:14.679050 | orchestrator |
2026-03-17 01:04:14.679057 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-17 01:04:14.679064 | orchestrator | Tuesday 17 March 2026 01:03:41 +0000 (0:00:00.139) 0:01:34.212 *********
2026-03-17 01:04:14.679070 | orchestrator |
2026-03-17 01:04:14.679077 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-03-17 01:04:14.679084 | orchestrator | Tuesday 17 March 2026 01:03:41 +0000 (0:00:00.215) 0:01:34.427 *********
2026-03-17 01:04:14.679091 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:04:14.679096 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:14.679101 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:04:14.679108 | orchestrator |
2026-03-17 01:04:14.679115 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-03-17 01:04:14.679122 | orchestrator | Tuesday 17 March 2026 01:03:53 +0000 (0:00:12.082) 0:01:46.510 *********
2026-03-17 01:04:14.679129 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:14.679136 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:04:14.679148 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:04:14.679156 | orchestrator |
2026-03-17 01:04:14.679164 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-03-17 01:04:14.679170 | orchestrator | Tuesday 17 March 2026 01:04:03 +0000 (0:00:10.221) 0:01:56.731 *********
2026-03-17 01:04:14.679177 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:04:14.679184 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:04:14.679191 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:14.679198 | orchestrator |
2026-03-17 01:04:14.679205 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:04:14.679213 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 01:04:14.679227 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-17 01:04:14.679234 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-17 01:04:14.679240 | orchestrator |
2026-03-17 01:04:14.679247 | orchestrator |
2026-03-17 01:04:14.679254 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:04:14.679262 | orchestrator | Tuesday 17 March 2026 01:04:12 +0000 (0:00:08.668) 0:02:05.399 *********
2026-03-17 01:04:14.679269 | orchestrator | ===============================================================================
2026-03-17 01:04:14.679280 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.43s
2026-03-17 01:04:14.679287 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.04s
2026-03-17 01:04:14.679295 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.08s
2026-03-17 01:04:14.679301 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.22s
2026-03-17 01:04:14.679308 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.67s
2026-03-17 01:04:14.679315 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.59s
2026-03-17 01:04:14.679322 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.63s
2026-03-17 01:04:14.679328 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.45s
2026-03-17 01:04:14.679335 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.23s
2026-03-17 01:04:14.679341 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.11s
2026-03-17 01:04:14.679347 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.97s
2026-03-17 01:04:14.679352 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.92s
2026-03-17 01:04:14.679358 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.70s
2026-03-17 01:04:14.679364 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.58s
2026-03-17 01:04:14.679370 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.62s
2026-03-17 01:04:14.679377 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.53s
2026-03-17 01:04:14.679384 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.48s
2026-03-17 01:04:14.679390 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.19s
2026-03-17 01:04:14.679397 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.38s
2026-03-17 01:04:14.679405 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.36s
2026-03-17 01:04:14.679412 | orchestrator | 2026-03-17 01:04:14 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:04:14.679419 | orchestrator | 2026-03-17 01:04:14 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:14.679426 | orchestrator | 2026-03-17 01:04:14 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED
2026-03-17 01:04:14.679433 | orchestrator | 2026-03-17 01:04:14 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:17.704187 | orchestrator | 2026-03-17 01:04:17 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:17.706201 | orchestrator | 2026-03-17 01:04:17 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:04:17.708975 | orchestrator | 2026-03-17 01:04:17 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:17.711613 | orchestrator | 2026-03-17 01:04:17 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED
2026-03-17 01:04:17.712259 | orchestrator | 2026-03-17 01:04:17 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:20.747230 | orchestrator | 2026-03-17 01:04:20 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:20.747289 |
orchestrator | 2026-03-17 01:04:20 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:04:20.747600 | orchestrator | 2026-03-17 01:04:20 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:20.748337 | orchestrator | 2026-03-17 01:04:20 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED
2026-03-17 01:04:20.748371 | orchestrator | 2026-03-17 01:04:20 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:23.781865 | orchestrator | 2026-03-17 01:04:23 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:23.784595 | orchestrator | 2026-03-17 01:04:23 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:04:23.786874 | orchestrator | 2026-03-17 01:04:23 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:23.789318 | orchestrator | 2026-03-17 01:04:23 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED
2026-03-17 01:04:23.789717 | orchestrator | 2026-03-17 01:04:23 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:26.823005 | orchestrator | 2026-03-17 01:04:26 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:26.825116 | orchestrator | 2026-03-17 01:04:26 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:04:26.826848 | orchestrator | 2026-03-17 01:04:26 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:26.828156 | orchestrator | 2026-03-17 01:04:26 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED
2026-03-17 01:04:26.828223 | orchestrator | 2026-03-17 01:04:26 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:29.867240 | orchestrator | 2026-03-17 01:04:29 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:29.868799 | orchestrator | 2026-03-17 01:04:29 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:04:29.870416 | orchestrator | 2026-03-17 01:04:29 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:29.872285 | orchestrator | 2026-03-17 01:04:29 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED
2026-03-17 01:04:29.872326 | orchestrator | 2026-03-17 01:04:29 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:32.910788 | orchestrator | 2026-03-17 01:04:32 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:32.912324 | orchestrator | 2026-03-17 01:04:32 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:04:32.913673 | orchestrator | 2026-03-17 01:04:32 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:32.915058 | orchestrator | 2026-03-17 01:04:32 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED
2026-03-17 01:04:32.915154 | orchestrator | 2026-03-17 01:04:32 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:35.949330 | orchestrator | 2026-03-17 01:04:35 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:35.950799 | orchestrator | 2026-03-17 01:04:35 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:04:35.952396 | orchestrator | 2026-03-17 01:04:35 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:35.953864 | orchestrator | 2026-03-17 01:04:35 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED
2026-03-17 01:04:35.954036 | orchestrator | 2026-03-17 01:04:35 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:38.992285 | orchestrator | 2026-03-17 01:04:38 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:38.995162 | orchestrator | 2026-03-17 01:04:38 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state STARTED
2026-03-17 01:04:38.996549 | orchestrator | 2026-03-17 01:04:38 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:38.997937 | orchestrator | 2026-03-17 01:04:38 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED
2026-03-17 01:04:38.997974 | orchestrator | 2026-03-17 01:04:38 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:04:42.033735 | orchestrator | 2026-03-17 01:04:42 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED
2026-03-17 01:04:42.034645 | orchestrator | 2026-03-17 01:04:42 | INFO  | Task b0f6dea4-ab19-4194-824f-a084702bbba2 is in state SUCCESS
2026-03-17 01:04:42.036333 | orchestrator |
2026-03-17 01:04:42.036369 | orchestrator |
2026-03-17 01:04:42.036374 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:04:42.036379 | orchestrator |
2026-03-17 01:04:42.036383 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:04:42.036387 | orchestrator | Tuesday 17 March 2026 01:03:31 +0000 (0:00:00.503) 0:00:00.503 *********
2026-03-17 01:04:42.036391 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:04:42.036396 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:04:42.036400 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:04:42.036404 | orchestrator |
2026-03-17 01:04:42.036408 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:04:42.036412 | orchestrator | Tuesday 17 March 2026 01:03:32 +0000 (0:00:00.445) 0:00:00.949 *********
2026-03-17 01:04:42.036416 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-17 01:04:42.036420 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-17 01:04:42.036424 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-17
01:04:42.036428 | orchestrator |
2026-03-17 01:04:42.036431 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-17 01:04:42.036435 | orchestrator |
2026-03-17 01:04:42.036439 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-17 01:04:42.036443 | orchestrator | Tuesday 17 March 2026 01:03:32 +0000 (0:00:00.483) 0:00:01.432 *********
2026-03-17 01:04:42.036447 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:04:42.036451 | orchestrator |
2026-03-17 01:04:42.036455 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-03-17 01:04:42.036459 | orchestrator | Tuesday 17 March 2026 01:03:33 +0000 (0:00:00.641) 0:00:02.074 *********
2026-03-17 01:04:42.036463 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-17 01:04:42.036467 | orchestrator |
2026-03-17 01:04:42.036479 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-03-17 01:04:42.036484 | orchestrator | Tuesday 17 March 2026 01:03:38 +0000 (0:00:04.760) 0:00:06.834 *********
2026-03-17 01:04:42.036487 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-17 01:04:42.036491 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-17 01:04:42.036495 | orchestrator |
2026-03-17 01:04:42.036499 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-17 01:04:42.036503 | orchestrator | Tuesday 17 March 2026 01:03:45 +0000 (0:00:06.837) 0:00:13.671 *********
2026-03-17 01:04:42.036522 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:04:42.036529 | orchestrator |
2026-03-17 01:04:42.036536 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-03-17 01:04:42.036545 | orchestrator | Tuesday 17 March 2026 01:03:49 +0000 (0:00:04.028) 0:00:17.699 *********
2026-03-17 01:04:42.036555 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:04:42.036560 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-03-17 01:04:42.036566 | orchestrator |
2026-03-17 01:04:42.036571 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-03-17 01:04:42.036577 | orchestrator | Tuesday 17 March 2026 01:03:52 +0000 (0:00:03.751) 0:00:21.451 *********
2026-03-17 01:04:42.036583 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:04:42.036588 | orchestrator |
2026-03-17 01:04:42.036609 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-03-17 01:04:42.036615 | orchestrator | Tuesday 17 March 2026 01:03:56 +0000 (0:00:03.622) 0:00:25.073 *********
2026-03-17 01:04:42.036620 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-03-17 01:04:42.036626 | orchestrator |
2026-03-17 01:04:42.036631 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-17 01:04:42.036668 | orchestrator | Tuesday 17 March 2026 01:04:00 +0000 (0:00:03.932) 0:00:29.006 *********
2026-03-17 01:04:42.036675 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:04:42.036681 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:04:42.036687 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:04:42.036693 | orchestrator |
2026-03-17 01:04:42.036699 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-03-17 01:04:42.036705 | orchestrator | Tuesday 17 March 2026 01:04:00 +0000 (0:00:00.238) 0:00:29.245 *********
2026-03-17 01:04:42.036713 |
orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.036733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.036745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.036759 | orchestrator | 2026-03-17 01:04:42.036766 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-17 01:04:42.036773 | orchestrator | Tuesday 17 March 2026 01:04:01 +0000 (0:00:01.001) 0:00:30.246 ********* 2026-03-17 01:04:42.036779 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:42.036793 | orchestrator | 2026-03-17 01:04:42.036806 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-17 01:04:42.036811 | orchestrator | Tuesday 17 March 2026 01:04:01 +0000 (0:00:00.237) 0:00:30.483 ********* 2026-03-17 01:04:42.036817 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:42.036823 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:42.036830 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:42.036838 | orchestrator | 2026-03-17 01:04:42.036845 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-17 01:04:42.036851 | orchestrator | Tuesday 17 March 2026 01:04:02 +0000 (0:00:00.955) 0:00:31.439 ********* 2026-03-17 
01:04:42.036857 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:04:42.036864 | orchestrator | 2026-03-17 01:04:42.036870 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-17 01:04:42.036877 | orchestrator | Tuesday 17 March 2026 01:04:03 +0000 (0:00:00.628) 0:00:32.068 ********* 2026-03-17 01:04:42.036884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.036904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.036916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.036928 | orchestrator | 2026-03-17 01:04:42.036938 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-17 01:04:42.036945 | orchestrator | Tuesday 17 March 2026 01:04:05 +0000 (0:00:01.830) 0:00:33.898 ********* 2026-03-17 01:04:42.036952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 01:04:42.036959 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:42.036966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 01:04:42.036973 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:42.036985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 01:04:42.036996 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:42.037003 | orchestrator | 2026-03-17 01:04:42.037009 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-17 01:04:42.037016 | orchestrator | Tuesday 17 March 2026 01:04:06 +0000 (0:00:00.846) 0:00:34.745 ********* 2026-03-17 01:04:42.037023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 01:04:42.037030 | orchestrator | skipping: [testbed-node-0] 2026-03-17 
01:04:42.037157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 01:04:42.037168 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:42.037177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 01:04:42.037184 | 
orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:42.037190 | orchestrator | 2026-03-17 01:04:42.037197 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-17 01:04:42.037203 | orchestrator | Tuesday 17 March 2026 01:04:06 +0000 (0:00:00.639) 0:00:35.384 ********* 2026-03-17 01:04:42.037216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.037229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.037241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.037248 | orchestrator | 2026-03-17 01:04:42.037256 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-17 01:04:42.037262 | orchestrator | Tuesday 17 March 2026 01:04:08 +0000 (0:00:01.691) 0:00:37.075 ********* 2026-03-17 01:04:42.037270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.037277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.037294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.037302 | orchestrator | 2026-03-17 01:04:42.037308 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-17 01:04:42.037314 | orchestrator | Tuesday 17 March 2026 01:04:12 +0000 (0:00:04.241) 0:00:41.317 ********* 2026-03-17 01:04:42.037319 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-17 01:04:42.037326 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-17 01:04:42.037335 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-17 01:04:42.037342 | orchestrator | 2026-03-17 01:04:42.037348 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-17 01:04:42.037354 | orchestrator | Tuesday 17 March 2026 01:04:14 +0000 (0:00:01.495) 0:00:42.813 ********* 2026-03-17 01:04:42.037360 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:04:42.037366 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:04:42.037372 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:04:42.037378 | orchestrator | 2026-03-17 01:04:42.037384 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-17 01:04:42.037391 | orchestrator | Tuesday 17 March 2026 01:04:15 +0000 (0:00:01.388) 0:00:44.201 ********* 2026-03-17 01:04:42.037397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 01:04:42.037404 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:42.037410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 01:04:42.037421 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:42.037433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-17 01:04:42.037439 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:42.037444 | orchestrator | 2026-03-17 01:04:42.037447 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-17 01:04:42.037451 | orchestrator | Tuesday 17 March 2026 01:04:16 +0000 (0:00:01.017) 0:00:45.219 ********* 2026-03-17 01:04:42.037458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.037462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.037467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}}) 2026-03-17 01:04:42.037480 | orchestrator | 2026-03-17 01:04:42.037486 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-17 01:04:42.037493 | orchestrator | Tuesday 17 March 2026 01:04:18 +0000 (0:00:01.502) 0:00:46.722 ********* 2026-03-17 01:04:42.037498 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:04:42.037504 | orchestrator | 2026-03-17 01:04:42.037510 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-17 01:04:42.037516 | orchestrator | Tuesday 17 March 2026 01:04:20 +0000 (0:00:02.612) 0:00:49.334 ********* 2026-03-17 01:04:42.037523 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:04:42.037529 | orchestrator | 2026-03-17 01:04:42.037535 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-17 01:04:42.037541 | orchestrator | Tuesday 17 March 2026 01:04:23 +0000 (0:00:02.839) 0:00:52.173 ********* 2026-03-17 01:04:42.037551 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:04:42.037557 | orchestrator | 2026-03-17 01:04:42.037564 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-17 01:04:42.037571 | orchestrator | Tuesday 17 March 2026 01:04:36 +0000 (0:00:12.965) 0:01:05.138 ********* 2026-03-17 01:04:42.037578 | orchestrator | 2026-03-17 01:04:42.037586 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-17 01:04:42.037608 | orchestrator | Tuesday 17 March 2026 01:04:36 +0000 (0:00:00.060) 0:01:05.198 ********* 2026-03-17 01:04:42.037616 | orchestrator | 2026-03-17 01:04:42.037622 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-17 01:04:42.037628 | orchestrator | Tuesday 17 March 2026 01:04:36 +0000 (0:00:00.074) 0:01:05.273 ********* 2026-03-17 01:04:42.037634 | orchestrator | 
2026-03-17 01:04:42.037640 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-17 01:04:42.037648 | orchestrator | Tuesday 17 March 2026 01:04:36 +0000 (0:00:00.060) 0:01:05.334 ********* 2026-03-17 01:04:42.037657 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:04:42.037664 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:04:42.037670 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:04:42.037677 | orchestrator | 2026-03-17 01:04:42.037684 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:04:42.037693 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 01:04:42.037700 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-17 01:04:42.037708 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-17 01:04:42.037714 | orchestrator | 2026-03-17 01:04:42.037721 | orchestrator | 2026-03-17 01:04:42.037731 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:04:42.037737 | orchestrator | Tuesday 17 March 2026 01:04:41 +0000 (0:00:04.518) 0:01:09.852 ********* 2026-03-17 01:04:42.037744 | orchestrator | =============================================================================== 2026-03-17 01:04:42.037750 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.97s 2026-03-17 01:04:42.037756 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.84s 2026-03-17 01:04:42.037770 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.76s 2026-03-17 01:04:42.037777 | orchestrator | placement : Restart placement-api container ----------------------------- 4.52s 2026-03-17 01:04:42.037784 | 
orchestrator | placement : Copying over placement.conf --------------------------------- 4.24s 2026-03-17 01:04:42.037791 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.03s 2026-03-17 01:04:42.037797 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.93s 2026-03-17 01:04:42.037804 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.75s 2026-03-17 01:04:42.037811 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.62s 2026-03-17 01:04:42.037818 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.84s 2026-03-17 01:04:42.037825 | orchestrator | placement : Creating placement databases -------------------------------- 2.61s 2026-03-17 01:04:42.037832 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.83s 2026-03-17 01:04:42.037839 | orchestrator | placement : Copying over config.json files for services ----------------- 1.69s 2026-03-17 01:04:42.037846 | orchestrator | placement : Check placement containers ---------------------------------- 1.50s 2026-03-17 01:04:42.037853 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.50s 2026-03-17 01:04:42.037859 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.39s 2026-03-17 01:04:42.037865 | orchestrator | placement : Copying over existing policy file --------------------------- 1.02s 2026-03-17 01:04:42.037869 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.00s 2026-03-17 01:04:42.037876 | orchestrator | placement : Set placement policy file ----------------------------------- 0.96s 2026-03-17 01:04:42.037881 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.85s 2026-03-17 01:04:42.037892 | 
orchestrator | 2026-03-17 01:04:42 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:04:42.037899 | orchestrator | 2026-03-17 01:04:42 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED 2026-03-17 01:04:42.037906 | orchestrator | 2026-03-17 01:04:42 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:45.090527 | orchestrator | 2026-03-17 01:04:45 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:04:45.092751 | orchestrator | 2026-03-17 01:04:45 | INFO  | Task 966fc0a3-2fda-4369-ae12-a2ea8b795ad3 is in state STARTED 2026-03-17 01:04:45.094443 | orchestrator | 2026-03-17 01:04:45 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:04:45.096396 | orchestrator | 2026-03-17 01:04:45 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED 2026-03-17 01:04:45.096449 | orchestrator | 2026-03-17 01:04:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:48.129575 | orchestrator | 2026-03-17 01:04:48 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:04:48.130706 | orchestrator | 2026-03-17 01:04:48 | INFO  | Task 966fc0a3-2fda-4369-ae12-a2ea8b795ad3 is in state SUCCESS 2026-03-17 01:04:48.132510 | orchestrator | 2026-03-17 01:04:48 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:04:48.134560 | orchestrator | 2026-03-17 01:04:48 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED 2026-03-17 01:04:48.136000 | orchestrator | 2026-03-17 01:04:48 | INFO  | Task 20d7e656-749b-483d-9eb7-e977064ceaf9 is in state STARTED 2026-03-17 01:04:48.136159 | orchestrator | 2026-03-17 01:04:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:51.175419 | orchestrator | 2026-03-17 01:04:51 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:04:51.177947 | orchestrator | 2026-03-17 
01:04:51 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:04:51.179793 | orchestrator | 2026-03-17 01:04:51 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED 2026-03-17 01:04:51.181571 | orchestrator | 2026-03-17 01:04:51 | INFO  | Task 20d7e656-749b-483d-9eb7-e977064ceaf9 is in state STARTED 2026-03-17 01:04:51.181635 | orchestrator | 2026-03-17 01:04:51 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:54.221357 | orchestrator | 2026-03-17 01:04:54 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state STARTED 2026-03-17 01:04:54.221599 | orchestrator | 2026-03-17 01:04:54 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:04:54.222378 | orchestrator | 2026-03-17 01:04:54 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED 2026-03-17 01:04:54.223957 | orchestrator | 2026-03-17 01:04:54 | INFO  | Task 20d7e656-749b-483d-9eb7-e977064ceaf9 is in state STARTED 2026-03-17 01:04:54.223990 | orchestrator | 2026-03-17 01:04:54 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:57.255797 | orchestrator | 2026-03-17 01:04:57 | INFO  | Task c1fa1d48-f300-4a17-84f0-6d07ab98d1ed is in state SUCCESS 2026-03-17 01:04:57.257090 | orchestrator | 2026-03-17 01:04:57.257130 | orchestrator | 2026-03-17 01:04:57.257136 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:04:57.257143 | orchestrator | 2026-03-17 01:04:57.257148 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:04:57.257154 | orchestrator | Tuesday 17 March 2026 01:04:45 +0000 (0:00:00.160) 0:00:00.160 ********* 2026-03-17 01:04:57.257159 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:04:57.257165 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:04:57.257171 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:04:57.257177 | orchestrator | 
2026-03-17 01:04:57.257183 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:04:57.257190 | orchestrator | Tuesday 17 March 2026 01:04:45 +0000 (0:00:00.271) 0:00:00.432 ********* 2026-03-17 01:04:57.257196 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-17 01:04:57.257202 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-17 01:04:57.257209 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-17 01:04:57.257216 | orchestrator | 2026-03-17 01:04:57.257221 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-17 01:04:57.257226 | orchestrator | 2026-03-17 01:04:57.257232 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-17 01:04:57.257237 | orchestrator | Tuesday 17 March 2026 01:04:45 +0000 (0:00:00.524) 0:00:00.956 ********* 2026-03-17 01:04:57.257446 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:04:57.257461 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:04:57.257467 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:04:57.257472 | orchestrator | 2026-03-17 01:04:57.257478 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:04:57.257484 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:04:57.257490 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:04:57.257496 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:04:57.257501 | orchestrator | 2026-03-17 01:04:57.257507 | orchestrator | 2026-03-17 01:04:57.257512 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:04:57.257518 | orchestrator | 
Tuesday 17 March 2026 01:04:46 +0000 (0:00:00.652) 0:00:01.609 ********* 2026-03-17 01:04:57.257536 | orchestrator | =============================================================================== 2026-03-17 01:04:57.257542 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.65s 2026-03-17 01:04:57.257548 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2026-03-17 01:04:57.257553 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2026-03-17 01:04:57.257559 | orchestrator | 2026-03-17 01:04:57.257564 | orchestrator | 2026-03-17 01:04:57.257569 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:04:57.257593 | orchestrator | 2026-03-17 01:04:57.257600 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:04:57.257605 | orchestrator | Tuesday 17 March 2026 01:02:07 +0000 (0:00:00.198) 0:00:00.198 ********* 2026-03-17 01:04:57.257611 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:04:57.257616 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:04:57.257621 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:04:57.257626 | orchestrator | 2026-03-17 01:04:57.257631 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:04:57.257637 | orchestrator | Tuesday 17 March 2026 01:02:07 +0000 (0:00:00.291) 0:00:00.490 ********* 2026-03-17 01:04:57.257642 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-17 01:04:57.257648 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-17 01:04:57.257653 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-17 01:04:57.257659 | orchestrator | 2026-03-17 01:04:57.257664 | orchestrator | PLAY [Apply role designate] 
**************************************************** 2026-03-17 01:04:57.257670 | orchestrator | 2026-03-17 01:04:57.257675 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-17 01:04:57.257681 | orchestrator | Tuesday 17 March 2026 01:02:08 +0000 (0:00:00.374) 0:00:00.864 ********* 2026-03-17 01:04:57.257687 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:04:57.257692 | orchestrator | 2026-03-17 01:04:57.257720 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-17 01:04:57.257726 | orchestrator | Tuesday 17 March 2026 01:02:08 +0000 (0:00:00.424) 0:00:01.288 ********* 2026-03-17 01:04:57.257732 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-17 01:04:57.257737 | orchestrator | 2026-03-17 01:04:57.257830 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-17 01:04:57.257839 | orchestrator | Tuesday 17 March 2026 01:02:13 +0000 (0:00:04.701) 0:00:05.990 ********* 2026-03-17 01:04:57.257852 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-17 01:04:57.257858 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-17 01:04:57.257863 | orchestrator | 2026-03-17 01:04:57.257869 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-17 01:04:57.257874 | orchestrator | Tuesday 17 March 2026 01:02:21 +0000 (0:00:07.846) 0:00:13.836 ********* 2026-03-17 01:04:57.257880 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-17 01:04:57.257886 | orchestrator | 2026-03-17 01:04:57.257891 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-17 
01:04:57.257896 | orchestrator | Tuesday 17 March 2026 01:02:24 +0000 (0:00:03.977) 0:00:17.814 ********* 2026-03-17 01:04:57.257912 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-17 01:04:57.257917 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-17 01:04:57.257923 | orchestrator | 2026-03-17 01:04:57.257928 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-17 01:04:57.257934 | orchestrator | Tuesday 17 March 2026 01:02:29 +0000 (0:00:04.384) 0:00:22.198 ********* 2026-03-17 01:04:57.257939 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-17 01:04:57.257952 | orchestrator | 2026-03-17 01:04:57.257958 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-17 01:04:57.257963 | orchestrator | Tuesday 17 March 2026 01:02:32 +0000 (0:00:03.423) 0:00:25.621 ********* 2026-03-17 01:04:57.258547 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-17 01:04:57.258688 | orchestrator | 2026-03-17 01:04:57.258698 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-17 01:04:57.258704 | orchestrator | Tuesday 17 March 2026 01:02:37 +0000 (0:00:04.338) 0:00:29.960 ********* 2026-03-17 01:04:57.258713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:04:57.258721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:04:57.258727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:04:57.258737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
2026-03-17 01:04:57.258911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.258952 | orchestrator | 2026-03-17 01:04:57.258960 | 
orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-17 01:04:57.258966 | orchestrator | Tuesday 17 March 2026 01:02:39 +0000 (0:00:02.666) 0:00:32.627 ********* 2026-03-17 01:04:57.258973 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:57.258980 | orchestrator | 2026-03-17 01:04:57.258987 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-17 01:04:57.258993 | orchestrator | Tuesday 17 March 2026 01:02:39 +0000 (0:00:00.118) 0:00:32.746 ********* 2026-03-17 01:04:57.258999 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:57.259007 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:57.259013 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:57.259018 | orchestrator | 2026-03-17 01:04:57.259025 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-17 01:04:57.259031 | orchestrator | Tuesday 17 March 2026 01:02:40 +0000 (0:00:00.264) 0:00:33.010 ********* 2026-03-17 01:04:57.259038 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:04:57.259044 | orchestrator | 2026-03-17 01:04:57.259051 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-17 01:04:57.259057 | orchestrator | Tuesday 17 March 2026 01:02:40 +0000 (0:00:00.590) 0:00:33.601 ********* 2026-03-17 01:04:57.259113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:04:57.259127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:04:57.259137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:04:57.259171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.259318 | orchestrator | 2026-03-17 01:04:57.259325 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-17 01:04:57.259331 | orchestrator | Tuesday 17 March 2026 01:02:47 +0000 (0:00:06.529) 0:00:40.130 ********* 2026-03-17 01:04:57.259338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:04:57.259345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:04:57.259356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.259365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.259388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
2026-03-17 01:04:57.259397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.259403 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:57.259409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:04:57.259415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:04:57.259425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.259433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.259454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.259461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.259468 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:57.259475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 
01:04:57.259482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:04:57.259494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.259502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.259524 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259539 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:04:57.259546 | orchestrator |
2026-03-17 01:04:57.259552 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-03-17 01:04:57.259559 | orchestrator | Tuesday 17 March 2026 01:02:48 +0000 (0:00:00.957) 0:00:41.087 *********
2026-03-17 01:04:57.259571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:04:57.259596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:04:57.259602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259648 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:04:57.259655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:04:57.259665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:04:57.259672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259716 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:04:57.259722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:04:57.259731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:04:57.259736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259782 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:04:57.259789 | orchestrator |
2026-03-17 01:04:57.259795 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-03-17 01:04:57.259802 | orchestrator | Tuesday 17 March 2026 01:02:50 +0000 (0:00:02.587) 0:00:43.675 *********
2026-03-17 01:04:57.259809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:04:57.259819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:04:57.259829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:04:57.259851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:04:57.259858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:04:57.259865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:04:57.259875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.259999 | orchestrator |
2026-03-17 01:04:57.260006 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-03-17 01:04:57.260012 | orchestrator | Tuesday 17 March 2026 01:02:57 +0000 (0:00:06.163) 0:00:49.838 *********
2026-03-17 01:04:57.260018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:04:57.260026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:04:57.260032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-17 01:04:57.260054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:04:57.260061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:04:57.260070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:04:57.260076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.260083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.260089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.260099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.260110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.260121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.260127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.260133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.260139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.260146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.260155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.260167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.260228 | orchestrator |
2026-03-17 01:04:57.260237 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-03-17 01:04:57.260243 | orchestrator | Tuesday 17 March 2026 01:03:14 +0000 (0:00:17.276) 0:01:07.115 *********
2026-03-17 01:04:57.260249 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-17 01:04:57.260256 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-17 01:04:57.260263 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-17 01:04:57.260269 | orchestrator |
2026-03-17 01:04:57.260277 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-03-17 01:04:57.260284 | orchestrator | Tuesday 17 March 2026 01:03:21 +0000 (0:00:03.209) 0:01:14.283 *********
2026-03-17 01:04:57.260291 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-17 01:04:57.260298 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-17 01:04:57.260305 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-17 01:04:57.260312 | orchestrator |
2026-03-17 01:04:57.260318 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-03-17 01:04:57.260323 | orchestrator | Tuesday 17 March 2026 01:03:24 +0000 (0:00:03.209) 0:01:17.492
2026-03-17 01:04:57.260330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:04:57.260336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:04:57.260349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:04:57.260359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-03-17 01:04:57.260437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260461 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260467 | orchestrator | 2026-03-17 01:04:57.260473 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-17 01:04:57.260478 | orchestrator | Tuesday 17 March 2026 01:03:28 +0000 (0:00:03.428) 0:01:20.921 ********* 2026-03-17 01:04:57.260484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:04:57.260490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:04:57.260502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:04:57.260512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260642 | orchestrator | 2026-03-17 01:04:57.260647 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-17 01:04:57.260653 | orchestrator | Tuesday 17 March 2026 01:03:31 +0000 (0:00:03.372) 0:01:24.293 ********* 2026-03-17 01:04:57.260659 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:57.260664 | orchestrator | skipping: [testbed-node-1] 2026-03-17 
01:04:57.260670 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:57.260676 | orchestrator | 2026-03-17 01:04:57.260681 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-17 01:04:57.260687 | orchestrator | Tuesday 17 March 2026 01:03:32 +0000 (0:00:00.721) 0:01:25.020 ********* 2026-03-17 01:04:57.260692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:04:57.260704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:04:57.260712 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260736 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:04:57.260742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:04:57.260751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:04:57.260759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260786 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:04:57.260792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-17 01:04:57.260802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:04:57.260810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:04:57.260836 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:04:57.260841 | orchestrator | 2026-03-17 01:04:57.260847 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-17 01:04:57.260853 | orchestrator | Tuesday 17 March 2026 01:03:33 +0000 (0:00:00.996) 0:01:26.017 ********* 2026-03-17 01:04:57.260859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:04:57.260868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:04:57.260879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-17 01:04:57.260886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.260970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.261043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:04:57.261053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:04:57.261063 | orchestrator |
2026-03-17 01:04:57.261068 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-17 01:04:57.261102 | orchestrator | Tuesday 17 March 2026 01:03:38 +0000 (0:00:05.636) 0:01:31.653 *********
2026-03-17 01:04:57.261109 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:04:57.261115 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:04:57.261120 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:04:57.261126 | orchestrator |
2026-03-17 01:04:57.261131 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-03-17 01:04:57.261137 | orchestrator | Tuesday 17 March 2026 01:03:39 +0000 (0:00:00.258) 0:01:31.912 *********
2026-03-17 01:04:57.261142 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-03-17 01:04:57.261148 | orchestrator |
2026-03-17 01:04:57.261154 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-03-17 01:04:57.261159 | orchestrator | Tuesday 17 March 2026 01:03:41 +0000 (0:00:02.069) 0:01:33.982 *********
2026-03-17 01:04:57.261165 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 01:04:57.261170 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-03-17 01:04:57.261176 | orchestrator |
2026-03-17 01:04:57.261181 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-03-17 01:04:57.261186 | orchestrator | Tuesday 17 March 2026 01:03:43 +0000 (0:00:02.372) 0:01:36.354 *********
2026-03-17 01:04:57.261192 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:57.261198 | orchestrator |
2026-03-17 01:04:57.261203 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-17 01:04:57.261209 | orchestrator | Tuesday 17 March 2026 01:03:58 +0000 (0:00:15.446) 0:01:51.801 *********
2026-03-17 01:04:57.261214 | orchestrator |
2026-03-17 01:04:57.261220 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-17 01:04:57.261225 | orchestrator | Tuesday 17 March 2026 01:03:59 +0000 (0:00:00.135) 0:01:51.936 *********
2026-03-17 01:04:57.261231 | orchestrator |
2026-03-17 01:04:57.261236 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-17 01:04:57.261242 | orchestrator | Tuesday 17 March 2026 01:03:59 +0000 (0:00:00.145) 0:01:52.082 *********
2026-03-17 01:04:57.261247 | orchestrator |
2026-03-17 01:04:57.261252 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-03-17 01:04:57.261257 | orchestrator | Tuesday 17 March 2026 01:03:59 +0000 (0:00:00.132) 0:01:52.214 *********
2026-03-17 01:04:57.261263 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:57.261268 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:04:57.261274 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:04:57.261279 | orchestrator |
2026-03-17 01:04:57.261284 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-03-17 01:04:57.261290 | orchestrator | Tuesday 17 March 2026 01:04:08 +0000 (0:00:08.650) 0:02:00.865 *********
2026-03-17 01:04:57.261295 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:57.261301 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:04:57.261306 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:04:57.261312 | orchestrator |
2026-03-17 01:04:57.261317 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-03-17 01:04:57.261323 | orchestrator | Tuesday 17 March 2026 01:04:19 +0000 (0:00:11.848) 0:02:12.714 *********
2026-03-17 01:04:57.261329 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:57.261334 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:04:57.261340 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:04:57.261345 | orchestrator |
2026-03-17 01:04:57.261354 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-03-17 01:04:57.261360 | orchestrator | Tuesday 17 March 2026 01:04:24 +0000 (0:00:04.826) 0:02:17.540 *********
2026-03-17 01:04:57.261365 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:57.261370 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:04:57.261376 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:04:57.261385 | orchestrator |
2026-03-17 01:04:57.261391 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-03-17 01:04:57.261395 | orchestrator | Tuesday 17 March 2026 01:04:34 +0000 (0:00:09.850) 0:02:27.391 *********
2026-03-17 01:04:57.261404 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:57.261408 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:04:57.261413 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:04:57.261419 | orchestrator |
2026-03-17 01:04:57.261425 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-03-17 01:04:57.261434 | orchestrator | Tuesday 17 March 2026 01:04:39 +0000 (0:00:04.606) 0:02:31.998 *********
2026-03-17 01:04:57.261440 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:57.261446 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:04:57.261451 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:04:57.261456 | orchestrator |
2026-03-17 01:04:57.261462 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-03-17 01:04:57.261467 | orchestrator | Tuesday 17 March 2026 01:04:49 +0000 (0:00:09.959) 0:02:41.957 *********
2026-03-17 01:04:57.261472 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:04:57.261477 | orchestrator |
2026-03-17 01:04:57.261483 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:04:57.261489 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 01:04:57.261495 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-17 01:04:57.261500 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-17 01:04:57.261506 | orchestrator |
2026-03-17 01:04:57.261511 | orchestrator |
2026-03-17 01:04:57.261517 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:04:57.261526 | orchestrator | Tuesday 17 March 2026 01:04:56 +0000 (0:00:06.966) 0:02:48.924 *********
2026-03-17 01:04:57.261532 | orchestrator | ===============================================================================
2026-03-17 01:04:57.261537 | orchestrator | designate : Copying over designate.conf -------------------------------- 17.28s
2026-03-17 01:04:57.261543 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.45s
2026-03-17 01:04:57.261548 | orchestrator | designate : Restart designate-api container ---------------------------- 11.85s
2026-03-17 01:04:57.261553 | orchestrator | designate : Restart designate-worker container -------------------------- 9.96s
2026-03-17 01:04:57.261558 | orchestrator | designate : Restart designate-producer container ------------------------ 9.85s
2026-03-17 01:04:57.261564 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.65s
2026-03-17 01:04:57.261571 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.85s
2026-03-17 01:04:57.261588 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.17s
2026-03-17 01:04:57.261593 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.97s
2026-03-17 01:04:57.261598 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.53s
2026-03-17 01:04:57.261603 | orchestrator | designate : Copying over config.json files for services ----------------- 6.16s
2026-03-17 01:04:57.261609 | orchestrator | designate : Check designate containers ---------------------------------- 5.64s
2026-03-17 01:04:57.261614 | orchestrator | designate : Restart designate-central container ------------------------- 4.83s
2026-03-17 01:04:57.261619 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.70s
2026-03-17 01:04:57.261624 | orchestrator | designate : Restart designate-mdns container ---------------------------- 4.61s
2026-03-17 01:04:57.261632 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.38s
2026-03-17 01:04:57.261638 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.34s
2026-03-17 01:04:57.261651 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.98s
2026-03-17 01:04:57.261657 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.43s
2026-03-17 01:04:57.261662 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.42s
2026-03-17 01:04:57.261670 | orchestrator | 2026-03-17 01:04:57 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:04:57.261676 | orchestrator | 2026-03-17 01:04:57 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED
2026-03-17 01:04:57.261682 | orchestrator | 2026-03-17 01:04:57 | INFO  | Task 20d7e656-749b-483d-9eb7-e977064ceaf9 is in state STARTED
2026-03-17 01:04:57.261688 | orchestrator | 2026-03-17 01:04:57 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:00.293048 | orchestrator | 2026-03-17 01:05:00 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:05:00.293699 | orchestrator | 2026-03-17 01:05:00 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED
2026-03-17 01:05:00.294825 | orchestrator | 2026-03-17 01:05:00 | INFO  | Task 3c22cb80-149b-4fb4-acac-2ce29f028017 is in state STARTED
2026-03-17 01:05:00.294858 | orchestrator | 2026-03-17 01:05:00 | INFO  | Task 20d7e656-749b-483d-9eb7-e977064ceaf9 is in state STARTED
2026-03-17 01:05:00.295937 | orchestrator | 2026-03-17 01:05:00 | INFO  | Wait 1 second(s) until the next check
[... identical status rounds repeated from 01:05:03 through 01:05:33; all four tasks remained in state STARTED ...]
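The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" rounds above are a poll-until-terminal-state loop over task IDs. An illustrative sketch, not the actual osism CLI code; `fetch_state` stands in for whatever task-status API the tool queries:

```python
import itertools
import time

# States after which a task will no longer change (assumed set).
TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, fetch_state, interval=1.0, max_checks=1000):
    """Poll every task until all reach a terminal state; return final states."""
    for _ in range(max_checks):
        states = {tid: fetch_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(s in TERMINAL for s in states.values()):
            return states
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError("tasks did not reach a terminal state in time")

# Simulated backend: the task reports STARTED twice, then SUCCESS.
calls = itertools.count()
final = wait_for_tasks(
    ["724aa64e"],
    lambda tid: "STARTED" if next(calls) < 2 else "SUCCESS",
    interval=0,
)
```

Note the loop in the log re-checks roughly every three seconds despite the one-second message, suggesting the per-task status queries themselves take time.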
2026-03-17 01:05:36.712138 | orchestrator | 2026-03-17 01:05:36 | INFO  | Task ddb42b81-f813-4702-82f8-d8627a783361 is in state STARTED
2026-03-17 01:05:36.712590 | orchestrator | 2026-03-17 01:05:36 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:05:36.713421 | orchestrator | 2026-03-17 01:05:36 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state STARTED
2026-03-17 01:05:36.714110 | orchestrator | 2026-03-17 01:05:36 | INFO  | Task 3c22cb80-149b-4fb4-acac-2ce29f028017 is in state SUCCESS
2026-03-17 01:05:36.714544 | orchestrator | 2026-03-17 01:05:36 | INFO  | Task 20d7e656-749b-483d-9eb7-e977064ceaf9 is in state STARTED
2026-03-17 01:05:36.714660 | orchestrator | 2026-03-17 01:05:36 | INFO  | Wait 1 second(s) until the next check
[... identical status rounds repeated from 01:05:39 through 01:06:10; tasks ddb42b81, 724aa64e, 4940c2ba, and 20d7e656 remained in state STARTED ...]
2026-03-17 01:06:10.154906 | orchestrator | 2026-03-17 01:06:10 | INFO  | Wait 1
second(s) until the next check
2026-03-17 01:06:13.182066 | orchestrator | 2026-03-17 01:06:13 | INFO  | Task ddb42b81-f813-4702-82f8-d8627a783361 is in state STARTED
2026-03-17 01:06:13.182122 | orchestrator | 2026-03-17 01:06:13 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED
2026-03-17 01:06:13.182939 | orchestrator | 2026-03-17 01:06:13 | INFO  | Task 4940c2ba-fed5-4dda-a4d4-ac585737c7e8 is in state SUCCESS
2026-03-17 01:06:13.183697 | orchestrator |
2026-03-17 01:06:13.183722 | orchestrator |
2026-03-17 01:06:13.183727 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:06:13.183731 | orchestrator |
2026-03-17 01:06:13.183767 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:06:13.183775 | orchestrator | Tuesday 17 March 2026 01:05:02 +0000 (0:00:00.291) 0:00:00.291 *********
2026-03-17 01:06:13.183781 | orchestrator | ok: [testbed-manager]
2026-03-17 01:06:13.183787 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:06:13.183793 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:06:13.183799 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:06:13.183804 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:06:13.183810 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:06:13.183815 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:06:13.183818 | orchestrator |
2026-03-17 01:06:13.183822 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:06:13.183826 | orchestrator | Tuesday 17 March 2026 01:05:02 +0000 (0:00:00.869) 0:00:01.160 *********
2026-03-17 01:06:13.183830 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:13.183834 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:13.183837 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:13.183841 |
orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:13.183845 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:13.183848 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:13.183863 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-17 01:06:13.183867 | orchestrator |
2026-03-17 01:06:13.183871 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-17 01:06:13.183878 | orchestrator |
2026-03-17 01:06:13.183882 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-17 01:06:13.183886 | orchestrator | Tuesday 17 March 2026 01:05:03 +0000 (0:00:00.972) 0:00:02.133 *********
2026-03-17 01:06:13.183890 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:06:13.183895 | orchestrator |
2026-03-17 01:06:13.183899 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-17 01:06:13.183902 | orchestrator | Tuesday 17 March 2026 01:05:07 +0000 (0:00:03.402) 0:00:05.536 *********
2026-03-17 01:06:13.183906 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-03-17 01:06:13.183910 | orchestrator |
2026-03-17 01:06:13.183913 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-17 01:06:13.183917 | orchestrator | Tuesday 17 March 2026 01:05:11 +0000 (0:00:03.850) 0:00:09.386 *********
2026-03-17 01:06:13.183921 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-17 01:06:13.183926 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-17 01:06:13.183930 | orchestrator |
2026-03-17 01:06:13.183934 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-17 01:06:13.183937 | orchestrator | Tuesday 17 March 2026 01:05:16 +0000 (0:00:05.850) 0:00:15.236 *********
2026-03-17 01:06:13.183941 | orchestrator | ok: [testbed-manager] => (item=service)
2026-03-17 01:06:13.183944 | orchestrator |
2026-03-17 01:06:13.183948 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-17 01:06:13.183952 | orchestrator | Tuesday 17 March 2026 01:05:19 +0000 (0:00:02.738) 0:00:17.975 *********
2026-03-17 01:06:13.183955 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:06:13.183959 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-03-17 01:06:13.183962 | orchestrator |
2026-03-17 01:06:13.183966 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-17 01:06:13.183970 | orchestrator | Tuesday 17 March 2026 01:05:24 +0000 (0:00:04.439) 0:00:22.414 *********
2026-03-17 01:06:13.183973 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-17 01:06:13.183977 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-17 01:06:13.183980 | orchestrator |
2026-03-17 01:06:13.183984 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-17 01:06:13.183988 | orchestrator | Tuesday 17 March 2026 01:05:29 +0000 (0:00:05.859) 0:00:28.274 *********
2026-03-17 01:06:13.183991 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-17 01:06:13.183995 | orchestrator |
2026-03-17 01:06:13.183998 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:06:13.184002 |
orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:13.184006 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:13.184010 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:13.184013 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:13.184017 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:13.184029 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:13.184032 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:06:13.184035 | orchestrator |
2026-03-17 01:06:13.184038 | orchestrator |
2026-03-17 01:06:13.184041 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:06:13.184044 | orchestrator | Tuesday 17 March 2026 01:05:33 +0000 (0:00:03.980) 0:00:32.254 *********
2026-03-17 01:06:13.184048 | orchestrator | ===============================================================================
2026-03-17 01:06:13.184051 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.86s
2026-03-17 01:06:13.184054 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.85s
2026-03-17 01:06:13.184057 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.44s
2026-03-17 01:06:13.184060 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 3.98s
2026-03-17 01:06:13.184085 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.85s
2026-03-17 01:06:13.184097 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 3.40s
2026-03-17 01:06:13.184102 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.74s
2026-03-17 01:06:13.184107 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s
2026-03-17 01:06:13.184116 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.87s
2026-03-17 01:06:13.184143 | orchestrator |
2026-03-17 01:06:13.184731 | orchestrator |
2026-03-17 01:06:13.184753 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:06:13.184760 | orchestrator |
2026-03-17 01:06:13.184766 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:06:13.184772 | orchestrator | Tuesday 17 March 2026 01:04:17 +0000 (0:00:00.228) 0:00:00.228 *********
2026-03-17 01:06:13.184777 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:06:13.184783 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:06:13.184899 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:06:13.184907 | orchestrator |
2026-03-17 01:06:13.184910 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:06:13.184913 | orchestrator | Tuesday 17 March 2026 01:04:18 +0000 (0:00:00.268) 0:00:00.497 *********
2026-03-17 01:06:13.184916 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-17 01:06:13.184920 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-17 01:06:13.184923 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-17 01:06:13.184975 | orchestrator |
2026-03-17 01:06:13.184979 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-17 01:06:13.184982 | orchestrator |
2026-03-17 01:06:13.184985 | orchestrator | TASK [magnum :
include_tasks] **************************************************
2026-03-17 01:06:13.184989 | orchestrator | Tuesday 17 March 2026 01:04:18 +0000 (0:00:00.316) 0:00:00.813 *********
2026-03-17 01:06:13.184992 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:06:13.184996 | orchestrator |
2026-03-17 01:06:13.184999 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-17 01:06:13.185002 | orchestrator | Tuesday 17 March 2026 01:04:19 +0000 (0:00:00.479) 0:00:01.292 *********
2026-03-17 01:06:13.185006 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-17 01:06:13.185009 | orchestrator |
2026-03-17 01:06:13.185012 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-17 01:06:13.185015 | orchestrator | Tuesday 17 March 2026 01:04:22 +0000 (0:00:03.595) 0:00:04.888 *********
2026-03-17 01:06:13.185018 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-17 01:06:13.185029 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-17 01:06:13.185033 | orchestrator |
2026-03-17 01:06:13.185036 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-17 01:06:13.185041 | orchestrator | Tuesday 17 March 2026 01:04:29 +0000 (0:00:06.657) 0:00:11.546 *********
2026-03-17 01:06:13.185046 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:06:13.185052 | orchestrator |
2026-03-17 01:06:13.185060 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-17 01:06:13.185064 | orchestrator | Tuesday 17 March 2026 01:04:32 +0000 (0:00:03.148) 0:00:14.695 *********
2026-03-17 01:06:13.185069 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:06:13.185074 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-17 01:06:13.185079 | orchestrator |
2026-03-17 01:06:13.185084 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-17 01:06:13.185089 | orchestrator | Tuesday 17 March 2026 01:04:36 +0000 (0:00:03.794) 0:00:18.489 *********
2026-03-17 01:06:13.185094 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:06:13.185099 | orchestrator |
2026-03-17 01:06:13.185104 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-17 01:06:13.185118 | orchestrator | Tuesday 17 March 2026 01:04:39 +0000 (0:00:03.239) 0:00:21.728 *********
2026-03-17 01:06:13.185123 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-17 01:06:13.185128 | orchestrator |
2026-03-17 01:06:13.185133 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-03-17 01:06:13.185138 | orchestrator | Tuesday 17 March 2026 01:04:43 +0000 (0:00:03.825) 0:00:25.554 *********
2026-03-17 01:06:13.185144 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:06:13.185149 | orchestrator |
2026-03-17 01:06:13.185154 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-17 01:06:13.185159 | orchestrator | Tuesday 17 March 2026 01:04:46 +0000 (0:00:03.673) 0:00:29.227 *********
2026-03-17 01:06:13.185163 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:06:13.185168 | orchestrator |
2026-03-17 01:06:13.185173 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-17 01:06:13.185179 | orchestrator | Tuesday 17 March 2026 01:04:50 +0000 (0:00:03.911) 0:00:33.138 *********
2026-03-17 01:06:13.185184 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:06:13.185189
| orchestrator |
2026-03-17 01:06:13.185194 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-17 01:06:13.185199 | orchestrator | Tuesday 17 March 2026 01:04:54 +0000 (0:00:03.189) 0:00:36.328 *********
2026-03-17 01:06:13.185214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-17 01:06:13.185221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-17 01:06:13.185230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-17 01:06:13.185233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:06:13.185240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:06:13.185251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:06:13.185258 | orchestrator |
2026-03-17 01:06:13.185263 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-17 01:06:13.185272 | orchestrator | Tuesday 17 March 2026 01:04:55 +0000 (0:00:01.684) 0:00:38.012 *********
2026-03-17 01:06:13.185277 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:06:13.185282 | orchestrator |
2026-03-17 01:06:13.185287 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-17 01:06:13.185291 | orchestrator | Tuesday 17 March 2026 01:04:55 +0000 (0:00:00.159) 0:00:38.172 *********
2026-03-17 01:06:13.185296 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:06:13.185301 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:06:13.185306 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:06:13.185310 | orchestrator |
2026-03-17 01:06:13.185316 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-03-17 01:06:13.185321 | orchestrator | Tuesday 17 March 2026 01:04:56 +0000 (0:00:00.530) 0:00:38.702 *********
2026-03-17 01:06:13.185326 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 01:06:13.185331 | orchestrator |
2026-03-17 01:06:13.185336 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-03-17 01:06:13.185424 | orchestrator | Tuesday 17 March 2026 01:04:57 +0000 (0:00:00.815) 0:00:39.518 *********
2026-03-17 01:06:13.185433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-17 01:06:13.185439
| orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.185469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.185475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.185480 | orchestrator | 2026-03-17 01:06:13.185485 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-17 01:06:13.185490 | orchestrator | Tuesday 17 March 2026 01:04:59 +0000 (0:00:02.388) 0:00:41.906 ********* 2026-03-17 01:06:13.185545 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:06:13.185553 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:06:13.185560 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:06:13.185564 | orchestrator | 2026-03-17 01:06:13.185569 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-17 01:06:13.185574 | orchestrator | Tuesday 17 March 2026 01:04:59 +0000 (0:00:00.343) 0:00:42.249 ********* 2026-03-17 01:06:13.185579 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:06:13.185584 | orchestrator | 2026-03-17 01:06:13.185589 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-17 01:06:13.185594 | orchestrator | Tuesday 17 March 2026 01:05:00 +0000 (0:00:00.866) 0:00:43.116 ********* 2026-03-17 01:06:13.185599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.185629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.185634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.185643 | orchestrator | 2026-03-17 01:06:13.185648 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-17 01:06:13.185653 | orchestrator | Tuesday 17 March 2026 01:05:03 +0000 (0:00:02.651) 0:00:45.768 ********* 2026-03-17 01:06:13.185662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:13.185669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:13.185672 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:13.185676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:13.185679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:13.185685 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:13.185688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:13.185695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:13.185698 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:13.185701 | orchestrator | 2026-03-17 01:06:13.185704 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-17 01:06:13.185707 | orchestrator | Tuesday 17 March 2026 01:05:05 +0000 (0:00:01.837) 0:00:47.606 ********* 2026-03-17 01:06:13.185711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:13.185716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:13.185721 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:13.185729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:13.185741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:13.185747 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:13.185752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:13.185758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:13.185763 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:13.185769 | orchestrator | 2026-03-17 01:06:13.185774 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-17 01:06:13.185779 | orchestrator | Tuesday 17 March 2026 01:05:07 +0000 (0:00:02.617) 0:00:50.224 ********* 2026-03-17 01:06:13.185784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.185813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.185818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.185826 | orchestrator | 2026-03-17 01:06:13.185832 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-17 01:06:13.185837 | orchestrator | Tuesday 17 March 2026 01:05:10 +0000 (0:00:02.657) 0:00:52.881 ********* 2026-03-17 01:06:13.185843 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.185875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.185881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.185885 | orchestrator | 2026-03-17 01:06:13.185890 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-17 01:06:13.185897 | orchestrator | Tuesday 17 March 2026 01:05:17 +0000 (0:00:06.823) 0:00:59.705 ********* 2026-03-17 01:06:13.185903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:13.185908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:13.185914 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:13.185919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:13.185928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:13.185933 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:13.185943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2026-03-17 01:06:13.185948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:06:13.185953 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:13.185958 | orchestrator | 2026-03-17 01:06:13.185963 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-17 01:06:13.185968 | orchestrator | Tuesday 17 March 2026 01:05:19 +0000 (0:00:01.665) 0:01:01.370 ********* 2026-03-17 01:06:13.185974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9511', 'listen_port': '9511'}}}}) 2026-03-17 01:06:13.185996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.186001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.186009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:06:13.186041 | orchestrator | 2026-03-17 01:06:13.186047 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-17 01:06:13.186052 | orchestrator | Tuesday 17 March 2026 01:05:22 +0000 (0:00:03.039) 0:01:04.410 ********* 2026-03-17 01:06:13.186058 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:13.186064 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:13.186070 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:13.186076 | orchestrator | 2026-03-17 01:06:13.186082 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-17 01:06:13.186086 | orchestrator | Tuesday 17 March 2026 01:05:22 +0000 (0:00:00.524) 0:01:04.935 ********* 2026-03-17 01:06:13.186090 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:13.186094 | orchestrator | 2026-03-17 01:06:13.186097 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-17 01:06:13.186101 | orchestrator | Tuesday 17 March 2026 01:05:24 +0000 (0:00:02.316) 0:01:07.252 ********* 2026-03-17 01:06:13.186104 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:13.186108 | orchestrator | 2026-03-17 01:06:13.186112 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-17 01:06:13.186115 | orchestrator | Tuesday 17 March 2026 01:05:27 +0000 (0:00:02.508) 0:01:09.761 ********* 2026-03-17 01:06:13.186119 | orchestrator | changed: 
[testbed-node-0] 2026-03-17 01:06:13.186122 | orchestrator | 2026-03-17 01:06:13.186126 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-17 01:06:13.186129 | orchestrator | Tuesday 17 March 2026 01:05:44 +0000 (0:00:16.735) 0:01:26.496 ********* 2026-03-17 01:06:13.186133 | orchestrator | 2026-03-17 01:06:13.186137 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-17 01:06:13.186140 | orchestrator | Tuesday 17 March 2026 01:05:44 +0000 (0:00:00.060) 0:01:26.557 ********* 2026-03-17 01:06:13.186144 | orchestrator | 2026-03-17 01:06:13.186148 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-17 01:06:13.186151 | orchestrator | Tuesday 17 March 2026 01:05:44 +0000 (0:00:00.057) 0:01:26.614 ********* 2026-03-17 01:06:13.186155 | orchestrator | 2026-03-17 01:06:13.186158 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-17 01:06:13.186162 | orchestrator | Tuesday 17 March 2026 01:05:44 +0000 (0:00:00.061) 0:01:26.676 ********* 2026-03-17 01:06:13.186166 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:13.186171 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:06:13.186176 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:06:13.186181 | orchestrator | 2026-03-17 01:06:13.186186 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-17 01:06:13.186190 | orchestrator | Tuesday 17 March 2026 01:05:59 +0000 (0:00:14.860) 0:01:41.536 ********* 2026-03-17 01:06:13.186195 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:13.186201 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:06:13.186206 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:06:13.186212 | orchestrator | 2026-03-17 01:06:13.186221 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-17 01:06:13.186228 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 01:06:13.186236 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-17 01:06:13.186239 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-17 01:06:13.186243 | orchestrator | 2026-03-17 01:06:13.186247 | orchestrator | 2026-03-17 01:06:13.186250 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:06:13.186254 | orchestrator | Tuesday 17 March 2026 01:06:10 +0000 (0:00:11.504) 0:01:53.041 ********* 2026-03-17 01:06:13.186258 | orchestrator | =============================================================================== 2026-03-17 01:06:13.186261 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.74s 2026-03-17 01:06:13.186265 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.86s 2026-03-17 01:06:13.186268 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.50s 2026-03-17 01:06:13.186272 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.82s 2026-03-17 01:06:13.186276 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.66s 2026-03-17 01:06:13.186281 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.91s 2026-03-17 01:06:13.186286 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.83s 2026-03-17 01:06:13.186294 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.79s 2026-03-17 01:06:13.186300 | orchestrator | magnum : Creating Magnum trustee domain 
--------------------------------- 3.67s 2026-03-17 01:06:13.186305 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.60s 2026-03-17 01:06:13.186310 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.24s 2026-03-17 01:06:13.186315 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.19s 2026-03-17 01:06:13.186319 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.15s 2026-03-17 01:06:13.186324 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.04s 2026-03-17 01:06:13.186330 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.66s 2026-03-17 01:06:13.186335 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.65s 2026-03-17 01:06:13.186340 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.62s 2026-03-17 01:06:13.186346 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.51s 2026-03-17 01:06:13.186351 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.39s 2026-03-17 01:06:13.186357 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.32s 2026-03-17 01:06:13.186362 | orchestrator | 2026-03-17 01:06:13 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:06:13.186368 | orchestrator | 2026-03-17 01:06:13 | INFO  | Task 20d7e656-749b-483d-9eb7-e977064ceaf9 is in state STARTED 2026-03-17 01:06:13.186373 | orchestrator | 2026-03-17 01:06:13 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:16.215573 | orchestrator | 2026-03-17 01:06:16 | INFO  | Task ddb42b81-f813-4702-82f8-d8627a783361 is in state STARTED 2026-03-17 01:06:16.215633 | orchestrator | 2026-03-17 01:06:16 | INFO  | 
Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state STARTED 2026-03-17 01:06:16.216170 | orchestrator | 2026-03-17 01:06:16 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:06:16.216957 | orchestrator | 2026-03-17 01:06:16 | INFO  | Task 20d7e656-749b-483d-9eb7-e977064ceaf9 is in state STARTED 2026-03-17 01:06:16.216979 | orchestrator | 2026-03-17 01:06:16 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:06:19.249653 | orchestrator | 2026-03-17 01:06:19 | INFO  | Task ddb42b81-f813-4702-82f8-d8627a783361 is in state STARTED 2026-03-17 01:06:19.251954 | orchestrator | 2026-03-17 01:06:19 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:06:19.252003 | orchestrator | 2026-03-17 01:06:19 | INFO  | Task 724aa64e-a2ff-4c94-bc2c-22094123cc5c is in state SUCCESS 2026-03-17 01:06:19.252351 | orchestrator | 2026-03-17 01:06:19.253483 | orchestrator | 2026-03-17 01:06:19.253549 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:06:19.253557 | orchestrator | 2026-03-17 01:06:19.253563 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:06:19.253567 | orchestrator | Tuesday 17 March 2026 01:02:07 +0000 (0:00:00.224) 0:00:00.224 ********* 2026-03-17 01:06:19.253570 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:06:19.253574 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:06:19.253578 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:06:19.253581 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:06:19.253584 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:06:19.253588 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:06:19.253591 | orchestrator | 2026-03-17 01:06:19.253594 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:06:19.253598 | orchestrator | Tuesday 17 March 2026 01:02:08 +0000 
(0:00:00.629) 0:00:00.854 ********* 2026-03-17 01:06:19.253601 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-17 01:06:19.253604 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-17 01:06:19.253610 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-17 01:06:19.253615 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-17 01:06:19.253620 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-17 01:06:19.253625 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-17 01:06:19.253629 | orchestrator | 2026-03-17 01:06:19.253635 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-17 01:06:19.253640 | orchestrator | 2026-03-17 01:06:19.253645 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-17 01:06:19.253650 | orchestrator | Tuesday 17 March 2026 01:02:08 +0000 (0:00:00.572) 0:00:01.426 ********* 2026-03-17 01:06:19.253656 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:06:19.253662 | orchestrator | 2026-03-17 01:06:19.253666 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-17 01:06:19.253671 | orchestrator | Tuesday 17 March 2026 01:02:09 +0000 (0:00:00.941) 0:00:02.367 ********* 2026-03-17 01:06:19.253676 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:06:19.253681 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:06:19.253686 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:06:19.253691 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:06:19.253697 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:06:19.253702 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:06:19.253707 | orchestrator | 2026-03-17 
01:06:19.253712 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-17 01:06:19.253718 | orchestrator | Tuesday 17 March 2026 01:02:10 +0000 (0:00:01.305) 0:00:03.673 ********* 2026-03-17 01:06:19.253723 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:06:19.253729 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:06:19.253734 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:06:19.253740 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:06:19.253744 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:06:19.253749 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:06:19.253754 | orchestrator | 2026-03-17 01:06:19.253759 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-17 01:06:19.253765 | orchestrator | Tuesday 17 March 2026 01:02:11 +0000 (0:00:01.065) 0:00:04.739 ********* 2026-03-17 01:06:19.253784 | orchestrator | ok: [testbed-node-0] => { 2026-03-17 01:06:19.253790 | orchestrator |  "changed": false, 2026-03-17 01:06:19.253795 | orchestrator |  "msg": "All assertions passed" 2026-03-17 01:06:19.253800 | orchestrator | } 2026-03-17 01:06:19.253805 | orchestrator | ok: [testbed-node-1] => { 2026-03-17 01:06:19.253811 | orchestrator |  "changed": false, 2026-03-17 01:06:19.253816 | orchestrator |  "msg": "All assertions passed" 2026-03-17 01:06:19.253821 | orchestrator | } 2026-03-17 01:06:19.253826 | orchestrator | ok: [testbed-node-2] => { 2026-03-17 01:06:19.253831 | orchestrator |  "changed": false, 2026-03-17 01:06:19.253836 | orchestrator |  "msg": "All assertions passed" 2026-03-17 01:06:19.253841 | orchestrator | } 2026-03-17 01:06:19.253846 | orchestrator | ok: [testbed-node-3] => { 2026-03-17 01:06:19.253851 | orchestrator |  "changed": false, 2026-03-17 01:06:19.253866 | orchestrator |  "msg": "All assertions passed" 2026-03-17 01:06:19.253871 | orchestrator | } 2026-03-17 01:06:19.253876 | orchestrator | ok: [testbed-node-4] => { 
2026-03-17 01:06:19.253886 | orchestrator |  "changed": false, 2026-03-17 01:06:19.254145 | orchestrator |  "msg": "All assertions passed" 2026-03-17 01:06:19.254162 | orchestrator | } 2026-03-17 01:06:19.254168 | orchestrator | ok: [testbed-node-5] => { 2026-03-17 01:06:19.254203 | orchestrator |  "changed": false, 2026-03-17 01:06:19.254210 | orchestrator |  "msg": "All assertions passed" 2026-03-17 01:06:19.254215 | orchestrator | } 2026-03-17 01:06:19.254221 | orchestrator | 2026-03-17 01:06:19.254228 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-17 01:06:19.254234 | orchestrator | Tuesday 17 March 2026 01:02:12 +0000 (0:00:00.641) 0:00:05.380 ********* 2026-03-17 01:06:19.254240 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.254245 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.254251 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.254256 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.254261 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.254266 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.254272 | orchestrator | 2026-03-17 01:06:19.254277 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-17 01:06:19.254283 | orchestrator | Tuesday 17 March 2026 01:02:13 +0000 (0:00:00.511) 0:00:05.892 ********* 2026-03-17 01:06:19.254288 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-17 01:06:19.254294 | orchestrator | 2026-03-17 01:06:19.254300 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-17 01:06:19.254305 | orchestrator | Tuesday 17 March 2026 01:02:16 +0000 (0:00:03.550) 0:00:09.442 ********* 2026-03-17 01:06:19.254311 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-17 01:06:19.254318 | 
orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-17 01:06:19.254324 | orchestrator | 2026-03-17 01:06:19.254340 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-17 01:06:19.254346 | orchestrator | Tuesday 17 March 2026 01:02:24 +0000 (0:00:08.144) 0:00:17.587 ********* 2026-03-17 01:06:19.254352 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-17 01:06:19.254358 | orchestrator | 2026-03-17 01:06:19.254363 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-17 01:06:19.254369 | orchestrator | Tuesday 17 March 2026 01:02:28 +0000 (0:00:03.767) 0:00:21.354 ********* 2026-03-17 01:06:19.254374 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-17 01:06:19.254380 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-17 01:06:19.254579 | orchestrator | 2026-03-17 01:06:19.254586 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-17 01:06:19.254592 | orchestrator | Tuesday 17 March 2026 01:02:32 +0000 (0:00:04.179) 0:00:25.533 ********* 2026-03-17 01:06:19.254598 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-17 01:06:19.254613 | orchestrator | 2026-03-17 01:06:19.254619 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-17 01:06:19.254625 | orchestrator | Tuesday 17 March 2026 01:02:36 +0000 (0:00:03.721) 0:00:29.255 ********* 2026-03-17 01:06:19.254662 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-17 01:06:19.254667 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-17 01:06:19.254672 | orchestrator | 2026-03-17 01:06:19.254677 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-03-17 01:06:19.254681 | orchestrator | Tuesday 17 March 2026 01:02:44 +0000 (0:00:07.860) 0:00:37.116 ********* 2026-03-17 01:06:19.254686 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.254691 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.254695 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.254700 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.254705 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.254710 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.254716 | orchestrator | 2026-03-17 01:06:19.254721 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-17 01:06:19.254726 | orchestrator | Tuesday 17 March 2026 01:02:44 +0000 (0:00:00.628) 0:00:37.744 ********* 2026-03-17 01:06:19.254732 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.254737 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.254742 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.254748 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.254754 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.254759 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.254764 | orchestrator | 2026-03-17 01:06:19.254770 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-17 01:06:19.254775 | orchestrator | Tuesday 17 March 2026 01:02:47 +0000 (0:00:02.134) 0:00:39.879 ********* 2026-03-17 01:06:19.254781 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:06:19.254786 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:06:19.254792 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:06:19.254797 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:06:19.254802 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:06:19.254807 | orchestrator | ok: [testbed-node-5] 2026-03-17 
01:06:19.254813 | orchestrator | 2026-03-17 01:06:19.254818 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-17 01:06:19.254824 | orchestrator | Tuesday 17 March 2026 01:02:48 +0000 (0:00:01.064) 0:00:40.944 ********* 2026-03-17 01:06:19.254869 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.254877 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.254882 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.254888 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.254893 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.254899 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.254904 | orchestrator | 2026-03-17 01:06:19.254910 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-17 01:06:19.254916 | orchestrator | Tuesday 17 March 2026 01:02:51 +0000 (0:00:03.257) 0:00:44.201 ********* 2026-03-17 01:06:19.254923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.254963 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.254971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.254978 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.254985 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.254991 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.255001 | orchestrator | 2026-03-17 01:06:19.255007 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-17 01:06:19.255013 | orchestrator | Tuesday 17 March 2026 01:02:54 +0000 (0:00:02.708) 0:00:46.911 ********* 2026-03-17 01:06:19.255018 | orchestrator | [WARNING]: Skipped 2026-03-17 01:06:19.255024 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-17 01:06:19.255029 | orchestrator | due to this access issue: 2026-03-17 01:06:19.255034 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-17 01:06:19.255039 | orchestrator | a directory 2026-03-17 01:06:19.255044 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:06:19.255049 | orchestrator | 2026-03-17 01:06:19.255070 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-17 01:06:19.255077 | orchestrator | Tuesday 17 March 2026 01:02:54 +0000 (0:00:00.740) 0:00:47.652 ********* 2026-03-17 01:06:19.255082 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:06:19.255088 | orchestrator | 2026-03-17 01:06:19.255094 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-17 01:06:19.255099 | orchestrator | Tuesday 17 March 2026 01:02:55 +0000 (0:00:01.132) 0:00:48.785 ********* 2026-03-17 01:06:19.255104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.255111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.255117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.255129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.255151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.255158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.255164 | orchestrator | 2026-03-17 01:06:19.255170 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-17 01:06:19.255175 | orchestrator | Tuesday 17 March 2026 01:02:59 +0000 (0:00:03.222) 0:00:52.007 ********* 2026-03-17 01:06:19.255181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.255186 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.255194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.255199 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.255204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2026-03-17 01:06:19.255210 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.255231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.255238 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.255243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.255249 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.255254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.255264 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.255270 | orchestrator | 2026-03-17 01:06:19.255275 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-17 01:06:19.255281 | orchestrator | Tuesday 17 March 2026 01:03:02 +0000 (0:00:02.964) 0:00:54.971 ********* 2026-03-17 01:06:19.255287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.255292 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.255314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.255323 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.255329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.255334 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.255340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.255349 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.255355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 
01:06:19.255360 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.255365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.255371 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.255376 | orchestrator | 2026-03-17 01:06:19.255381 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-17 01:06:19.255389 | orchestrator | Tuesday 17 March 2026 01:03:04 +0000 (0:00:02.384) 0:00:57.356 ********* 2026-03-17 01:06:19.255395 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.255401 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.255406 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.255411 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.255417 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.255422 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.255428 | orchestrator | 2026-03-17 01:06:19.255434 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-17 01:06:19.255439 | orchestrator | Tuesday 17 March 2026 01:03:06 +0000 (0:00:02.075) 0:00:59.432 ********* 2026-03-17 01:06:19.255444 | orchestrator | skipping: [testbed-node-0] 
2026-03-17 01:06:19.255450 | orchestrator | 2026-03-17 01:06:19.255455 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-17 01:06:19.255460 | orchestrator | Tuesday 17 March 2026 01:03:06 +0000 (0:00:00.135) 0:00:59.567 ********* 2026-03-17 01:06:19.255466 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.255471 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.255477 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.255483 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.255501 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.255507 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.255512 | orchestrator | 2026-03-17 01:06:19.255532 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-17 01:06:19.255539 | orchestrator | Tuesday 17 March 2026 01:03:07 +0000 (0:00:00.637) 0:01:00.205 ********* 2026-03-17 01:06:19.255550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.255556 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.255562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.255568 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.255573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.255579 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.255590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.255596 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.255602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.255611 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.255617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.255622 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.255628 | orchestrator | 2026-03-17 01:06:19.255634 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-17 01:06:19.255640 | orchestrator | Tuesday 17 March 2026 01:03:09 +0000 (0:00:02.505) 0:01:02.712 ********* 2026-03-17 01:06:19.255646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.255655 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.255661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.255680 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.255687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.255693 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.255699 | orchestrator | 2026-03-17 01:06:19.255705 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-17 01:06:19.255710 | orchestrator | Tuesday 17 March 2026 01:03:14 +0000 (0:00:04.147) 0:01:06.859 ********* 2026-03-17 01:06:19.255719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.255726 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.255737 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.255744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-03-17 01:06:19.255749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.255758 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.255772 | orchestrator | 2026-03-17 01:06:19.255779 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-17 01:06:19.255784 | orchestrator | Tuesday 17 March 2026 01:03:21 +0000 (0:00:06.953) 
0:01:13.813 ********* 2026-03-17 01:06:19.255790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.255796 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.255802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.255808 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.255814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.255819 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.255825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.255835 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.255844 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.255850 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.255879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.255886 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.255892 | orchestrator | 2026-03-17 01:06:19.255898 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-17 01:06:19.255905 | orchestrator | Tuesday 17 March 2026 01:03:23 +0000 (0:00:02.745) 0:01:16.558 ********* 2026-03-17 01:06:19.255911 | 
orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.255917 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.255922 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.255928 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:19.255934 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:06:19.255939 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:06:19.255945 | orchestrator | 2026-03-17 01:06:19.255951 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-17 01:06:19.255956 | orchestrator | Tuesday 17 March 2026 01:03:26 +0000 (0:00:02.741) 0:01:19.300 ********* 2026-03-17 01:06:19.255962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.255967 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.255973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.255982 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.255992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.255998 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.256012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.256018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.256024 | orchestrator | 2026-03-17 01:06:19.256029 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-17 01:06:19.256039 | orchestrator | Tuesday 17 March 2026 01:03:30 +0000 (0:00:03.920) 0:01:23.221 ********* 2026-03-17 01:06:19.256045 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256050 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256056 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256062 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256068 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256073 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256079 | orchestrator | 2026-03-17 01:06:19.256085 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-17 01:06:19.256090 | orchestrator | Tuesday 17 March 2026 01:03:32 +0000 (0:00:02.348) 0:01:25.569 ********* 2026-03-17 01:06:19.256096 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256101 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256106 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256111 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256117 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256122 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256128 | orchestrator | 2026-03-17 01:06:19.256133 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-17 01:06:19.256138 | orchestrator | Tuesday 17 March 2026 01:03:35 +0000 (0:00:02.266) 0:01:27.835 ********* 2026-03-17 01:06:19.256147 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256152 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256158 | 
orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256163 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256169 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256174 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256179 | orchestrator | 2026-03-17 01:06:19.256185 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-17 01:06:19.256190 | orchestrator | Tuesday 17 March 2026 01:03:36 +0000 (0:00:01.927) 0:01:29.762 ********* 2026-03-17 01:06:19.256195 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256200 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256205 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256210 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256215 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256220 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256226 | orchestrator | 2026-03-17 01:06:19.256231 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-17 01:06:19.256236 | orchestrator | Tuesday 17 March 2026 01:03:38 +0000 (0:00:01.873) 0:01:31.636 ********* 2026-03-17 01:06:19.256241 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256246 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256251 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256256 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256261 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256267 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256272 | orchestrator | 2026-03-17 01:06:19.256278 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-17 01:06:19.256284 | orchestrator | Tuesday 17 March 2026 01:03:40 +0000 (0:00:01.750) 0:01:33.387 ********* 2026-03-17 01:06:19.256290 | 
orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256295 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256300 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256306 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256311 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256317 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256322 | orchestrator | 2026-03-17 01:06:19.256328 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-17 01:06:19.256333 | orchestrator | Tuesday 17 March 2026 01:03:42 +0000 (0:00:02.317) 0:01:35.704 ********* 2026-03-17 01:06:19.256343 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:06:19.256352 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256357 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:06:19.256363 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256369 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:06:19.256374 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256380 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:06:19.256385 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256390 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:06:19.256395 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256401 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:06:19.256406 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256411 | orchestrator | 2026-03-17 01:06:19.256417 | orchestrator | TASK [neutron : 
Copying over l3_agent.ini] ************************************* 2026-03-17 01:06:19.256422 | orchestrator | Tuesday 17 March 2026 01:03:45 +0000 (0:00:02.121) 0:01:37.826 ********* 2026-03-17 01:06:19.256428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.256435 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.256452 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.256467 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.256482 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.256509 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.256520 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256526 | orchestrator | 2026-03-17 01:06:19.256531 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] 
********************************* 2026-03-17 01:06:19.256537 | orchestrator | Tuesday 17 March 2026 01:03:46 +0000 (0:00:01.668) 0:01:39.494 ********* 2026-03-17 01:06:19.256546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.256557 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.256568 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.256582 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-03-17 01:06:19.256593 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.256607 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.256622 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256627 | orchestrator | 2026-03-17 01:06:19.256632 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-17 01:06:19.256638 
| orchestrator | Tuesday 17 March 2026 01:03:48 +0000 (0:00:01.755) 0:01:41.249 ********* 2026-03-17 01:06:19.256643 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256649 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256654 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256660 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256666 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256671 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256676 | orchestrator | 2026-03-17 01:06:19.256682 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-17 01:06:19.256687 | orchestrator | Tuesday 17 March 2026 01:03:50 +0000 (0:00:01.881) 0:01:43.131 ********* 2026-03-17 01:06:19.256693 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256698 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256704 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256709 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:06:19.256715 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:06:19.256720 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:06:19.256726 | orchestrator | 2026-03-17 01:06:19.256734 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-17 01:06:19.256740 | orchestrator | Tuesday 17 March 2026 01:03:53 +0000 (0:00:03.405) 0:01:46.537 ********* 2026-03-17 01:06:19.256746 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256751 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256757 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256762 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256768 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256773 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256778 | orchestrator | 2026-03-17 
01:06:19.256783 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-17 01:06:19.256789 | orchestrator | Tuesday 17 March 2026 01:03:56 +0000 (0:00:02.899) 0:01:49.436 ********* 2026-03-17 01:06:19.256795 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256800 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256806 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256811 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256816 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256822 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256827 | orchestrator | 2026-03-17 01:06:19.256833 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-17 01:06:19.256838 | orchestrator | Tuesday 17 March 2026 01:03:58 +0000 (0:00:01.859) 0:01:51.295 ********* 2026-03-17 01:06:19.256844 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256849 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256854 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256860 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256865 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256871 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256876 | orchestrator | 2026-03-17 01:06:19.256882 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-17 01:06:19.256887 | orchestrator | Tuesday 17 March 2026 01:04:01 +0000 (0:00:02.640) 0:01:53.935 ********* 2026-03-17 01:06:19.256892 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256904 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256909 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256915 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256920 | orchestrator | skipping: 
[testbed-node-3] 2026-03-17 01:06:19.256925 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256931 | orchestrator | 2026-03-17 01:06:19.256936 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-17 01:06:19.256941 | orchestrator | Tuesday 17 March 2026 01:04:03 +0000 (0:00:02.481) 0:01:56.417 ********* 2026-03-17 01:06:19.256946 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.256952 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.256957 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.256963 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.256968 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.256973 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.256979 | orchestrator | 2026-03-17 01:06:19.256984 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-17 01:06:19.256989 | orchestrator | Tuesday 17 March 2026 01:04:06 +0000 (0:00:02.591) 0:01:59.009 ********* 2026-03-17 01:06:19.256995 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.257000 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.257006 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.257011 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.257016 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.257022 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.257027 | orchestrator | 2026-03-17 01:06:19.257033 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-17 01:06:19.257043 | orchestrator | Tuesday 17 March 2026 01:04:08 +0000 (0:00:01.984) 0:02:00.994 ********* 2026-03-17 01:06:19.257048 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.257054 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.257059 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 01:06:19.257064 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.257070 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.257076 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.257081 | orchestrator | 2026-03-17 01:06:19.257086 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-17 01:06:19.257092 | orchestrator | Tuesday 17 March 2026 01:04:11 +0000 (0:00:03.065) 0:02:04.059 ********* 2026-03-17 01:06:19.257097 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-17 01:06:19.257103 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.257109 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-17 01:06:19.257114 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.257119 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-17 01:06:19.257125 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.257131 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-17 01:06:19.257136 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.257141 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-17 01:06:19.257147 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.257152 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-17 01:06:19.257157 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.257163 | orchestrator | 2026-03-17 01:06:19.257168 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-17 01:06:19.257173 | orchestrator | 
Tuesday 17 March 2026 01:04:13 +0000 (0:00:02.124) 0:02:06.184 ********* 2026-03-17 01:06:19.257182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.257191 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.257197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.257202 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.257211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.257217 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.257222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-17 01:06:19.257228 | orchestrator | skipping: [testbed-node-2] 2026-03-17 
01:06:19.257236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.257246 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.257252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:06:19.257258 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.257263 | orchestrator | 2026-03-17 01:06:19.257268 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-17 01:06:19.257274 | orchestrator | Tuesday 17 March 2026 01:04:15 +0000 (0:00:01.750) 0:02:07.935 
********* 2026-03-17 01:06:19.257279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.257289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.257295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-17 01:06:19.257307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.257313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.257319 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:06:19.257325 | orchestrator | 2026-03-17 01:06:19.257330 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-17 01:06:19.257336 | orchestrator | Tuesday 17 March 2026 01:04:18 +0000 (0:00:02.988) 0:02:10.923 ********* 2026-03-17 01:06:19.257341 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:06:19.257346 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:06:19.257352 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:06:19.257357 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:06:19.257362 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:06:19.257370 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:06:19.257375 | 
orchestrator | 2026-03-17 01:06:19.257380 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-17 01:06:19.257386 | orchestrator | Tuesday 17 March 2026 01:04:18 +0000 (0:00:00.433) 0:02:11.357 ********* 2026-03-17 01:06:19.257391 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:19.257396 | orchestrator | 2026-03-17 01:06:19.257401 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-17 01:06:19.257407 | orchestrator | Tuesday 17 March 2026 01:04:20 +0000 (0:00:02.400) 0:02:13.758 ********* 2026-03-17 01:06:19.257412 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:19.257417 | orchestrator | 2026-03-17 01:06:19.257426 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-17 01:06:19.257431 | orchestrator | Tuesday 17 March 2026 01:04:23 +0000 (0:00:02.654) 0:02:16.412 ********* 2026-03-17 01:06:19.257436 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:19.257442 | orchestrator | 2026-03-17 01:06:19.257447 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-17 01:06:19.257452 | orchestrator | Tuesday 17 March 2026 01:05:02 +0000 (0:00:38.735) 0:02:55.148 ********* 2026-03-17 01:06:19.257457 | orchestrator | 2026-03-17 01:06:19.257463 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-17 01:06:19.257468 | orchestrator | Tuesday 17 March 2026 01:05:02 +0000 (0:00:00.067) 0:02:55.216 ********* 2026-03-17 01:06:19.257473 | orchestrator | 2026-03-17 01:06:19.257479 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-17 01:06:19.257484 | orchestrator | Tuesday 17 March 2026 01:05:02 +0000 (0:00:00.300) 0:02:55.516 ********* 2026-03-17 01:06:19.257503 | orchestrator | 2026-03-17 01:06:19.257508 | orchestrator | TASK [neutron 
: Flush Handlers] ************************************************ 2026-03-17 01:06:19.257514 | orchestrator | Tuesday 17 March 2026 01:05:02 +0000 (0:00:00.076) 0:02:55.593 ********* 2026-03-17 01:06:19.257517 | orchestrator | 2026-03-17 01:06:19.257520 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-17 01:06:19.257523 | orchestrator | Tuesday 17 March 2026 01:05:02 +0000 (0:00:00.065) 0:02:55.659 ********* 2026-03-17 01:06:19.257527 | orchestrator | 2026-03-17 01:06:19.257530 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-17 01:06:19.257533 | orchestrator | Tuesday 17 March 2026 01:05:02 +0000 (0:00:00.074) 0:02:55.733 ********* 2026-03-17 01:06:19.257536 | orchestrator | 2026-03-17 01:06:19.257543 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-17 01:06:19.257547 | orchestrator | Tuesday 17 March 2026 01:05:03 +0000 (0:00:00.071) 0:02:55.804 ********* 2026-03-17 01:06:19.257551 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:06:19.257556 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:06:19.257561 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:06:19.257566 | orchestrator | 2026-03-17 01:06:19.257571 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-17 01:06:19.257576 | orchestrator | Tuesday 17 March 2026 01:05:31 +0000 (0:00:28.245) 0:03:24.049 ********* 2026-03-17 01:06:19.257581 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:06:19.257587 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:06:19.257592 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:06:19.257598 | orchestrator | 2026-03-17 01:06:19.257603 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:06:19.257608 | orchestrator | testbed-node-0 : ok=26  changed=15  
unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-17 01:06:19.257615 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-17 01:06:19.257620 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-17 01:06:19.257626 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-17 01:06:19.257631 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-17 01:06:19.257637 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-17 01:06:19.257642 | orchestrator | 2026-03-17 01:06:19.257647 | orchestrator | 2026-03-17 01:06:19.257652 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:06:19.257663 | orchestrator | Tuesday 17 March 2026 01:06:16 +0000 (0:00:45.520) 0:04:09.569 ********* 2026-03-17 01:06:19.257668 | orchestrator | =============================================================================== 2026-03-17 01:06:19.257674 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 45.52s 2026-03-17 01:06:19.257679 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 38.74s 2026-03-17 01:06:19.257684 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.25s 2026-03-17 01:06:19.257690 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 8.14s 2026-03-17 01:06:19.257695 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.86s 2026-03-17 01:06:19.257701 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.95s 2026-03-17 01:06:19.257706 | orchestrator | service-ks-register : neutron | 
Creating users -------------------------- 4.18s 2026-03-17 01:06:19.257711 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.15s 2026-03-17 01:06:19.257720 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.92s 2026-03-17 01:06:19.257726 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.77s 2026-03-17 01:06:19.257731 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.72s 2026-03-17 01:06:19.257736 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.55s 2026-03-17 01:06:19.257741 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.41s 2026-03-17 01:06:19.257746 | orchestrator | Setting sysctl values --------------------------------------------------- 3.26s 2026-03-17 01:06:19.257752 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.22s 2026-03-17 01:06:19.257758 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 3.07s 2026-03-17 01:06:19.257763 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.99s 2026-03-17 01:06:19.257769 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 2.96s 2026-03-17 01:06:19.257774 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 2.90s 2026-03-17 01:06:19.257779 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 2.75s 2026-03-17 01:06:19.257784 | orchestrator | 2026-03-17 01:06:19 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:06:19.257790 | orchestrator | 2026-03-17 01:06:19 | INFO  | Task 20d7e656-749b-483d-9eb7-e977064ceaf9 is in state STARTED 2026-03-17 01:06:19.257796 | orchestrator | 2026-03-17 
01:06:19 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:07:56.450717 | orchestrator | 2026-03-17 01:07:56 | INFO  | Task ddb42b81-f813-4702-82f8-d8627a783361 is in state STARTED
2026-03-17 01:07:56.451098 | orchestrator | 2026-03-17 01:07:56 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:07:56.452852 | orchestrator | 2026-03-17 01:07:56 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED
2026-03-17 01:07:56.456092 | orchestrator | 2026-03-17 01:07:56 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED
2026-03-17 01:07:56.458958 | orchestrator | 2026-03-17 01:07:56 | INFO  | Task 20d7e656-749b-483d-9eb7-e977064ceaf9 is in state SUCCESS
2026-03-17 01:07:56.461205 | orchestrator |
2026-03-17 01:07:56.461250 | orchestrator |
2026-03-17 01:07:56.461256 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:07:56.461262 | orchestrator |
2026-03-17 01:07:56.461267 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:07:56.461273 | orchestrator | Tuesday 17 March 2026 01:04:50 +0000 (0:00:00.285) 0:00:00.285 *********
2026-03-17 01:07:56.461278 | orchestrator | ok: [testbed-manager]
2026-03-17 01:07:56.461283 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:07:56.461288 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:07:56.461293 | orchestrator | ok:
[testbed-node-2] 2026-03-17 01:07:56.461296 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:07:56.461299 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:07:56.461303 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:07:56.461306 | orchestrator | 2026-03-17 01:07:56.461309 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:07:56.461312 | orchestrator | Tuesday 17 March 2026 01:04:51 +0000 (0:00:00.801) 0:00:01.087 ********* 2026-03-17 01:07:56.461316 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-17 01:07:56.461319 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-17 01:07:56.461345 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-17 01:07:56.461349 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-17 01:07:56.461352 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-17 01:07:56.461355 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-17 01:07:56.461358 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-17 01:07:56.461371 | orchestrator | 2026-03-17 01:07:56.461374 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-17 01:07:56.461378 | orchestrator | 2026-03-17 01:07:56.461393 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-17 01:07:56.461396 | orchestrator | Tuesday 17 March 2026 01:04:52 +0000 (0:00:00.715) 0:00:01.803 ********* 2026-03-17 01:07:56.461400 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:07:56.461405 | orchestrator | 2026-03-17 01:07:56.461408 | orchestrator | TASK [prometheus : Ensuring config directories exist] 
************************** 2026-03-17 01:07:56.461411 | orchestrator | Tuesday 17 March 2026 01:04:53 +0000 (0:00:01.446) 0:00:03.249 ********* 2026-03-17 01:07:56.461416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.461454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.461460 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-17 01:07:56.461604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.461616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461623 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.461627 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.461637 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.461640 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-03-17 01:07:56.461644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461654 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461662 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461678 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-17 01:07:56.461682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461703 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461708 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461734 | orchestrator | 2026-03-17 01:07:56.461737 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-17 01:07:56.461741 | orchestrator | Tuesday 17 March 2026 01:04:56 +0000 (0:00:02.989) 0:00:06.239 ********* 2026-03-17 01:07:56.461744 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:07:56.461747 | orchestrator | 2026-03-17 
01:07:56.461750 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-17 01:07:56.461754 | orchestrator | Tuesday 17 March 2026 01:04:57 +0000 (0:00:01.335) 0:00:07.574 ********* 2026-03-17 01:07:56.461758 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-17 01:07:56.461802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.461806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.461812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.461833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.461844 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.461849 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.461854 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.461862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461887 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461896 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461901 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461919 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461932 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-17 01:07:56.461939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2026-03-17 01:07:56.461975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.461980 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.461990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.462161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.462174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.462180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.462185 | orchestrator | 2026-03-17 01:07:56.462190 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-17 01:07:56.462196 | orchestrator | Tuesday 17 March 2026 01:05:03 +0000 (0:00:05.979) 0:00:13.553 ********* 2026-03-17 01:07:56.462202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.462241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.462282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.462428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462444 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-17 01:07:56.462450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462455 | orchestrator | 
skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.462464 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462488 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-17 01:07:56.462495 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462501 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:56.462506 | 
orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:56.462512 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:56.462625 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:07:56.462632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.462636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462645 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:07:56.462648 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.462655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.462671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462674 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:07:56.462678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462684 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:07:56.462687 | orchestrator | 2026-03-17 01:07:56.462691 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-17 01:07:56.462694 | orchestrator | Tuesday 17 March 2026 01:05:06 +0000 (0:00:02.691) 0:00:16.245 ********* 2026-03-17 01:07:56.462699 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-17 01:07:56.462705 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.462708 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462723 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-17 01:07:56.462726 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.462735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462845 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:07:56.462851 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:56.462854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.462857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462876 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:56.462879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.462882 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:07:56.462904 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:56.462907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.462912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462918 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 
01:07:56.462921 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:07:56.462924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.462928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.462976 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:07:56.462996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:07:56.463002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.463026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:07:56.463037 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:07:56.463041 | orchestrator | 2026-03-17 01:07:56.463046 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-17 01:07:56.463051 | orchestrator | Tuesday 17 March 2026 01:05:09 +0000 (0:00:02.911) 0:00:19.157 ********* 2026-03-17 01:07:56.463058 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.463063 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-17 01:07:56.463154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.463164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.463170 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.463175 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.463184 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.463189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.463193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.463196 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:07:56.463235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.463240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.463244 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.463250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.463253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.463258 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.463262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.463265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.463278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.463282 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-17 01:07:56.463288 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.463293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.463297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.463300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.463315 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:07:56.463321 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.463326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.463334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.463339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:07:56.463344 | orchestrator | 2026-03-17 01:07:56.463350 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-17 01:07:56.463354 | orchestrator | Tuesday 17 March 2026 01:05:16 +0000 (0:00:06.772) 0:00:25.929 ********* 2026-03-17 01:07:56.463359 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 01:07:56.463375 | orchestrator | 2026-03-17 01:07:56.463380 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-17 01:07:56.463387 | orchestrator | Tuesday 17 March 2026 01:05:17 +0000 (0:00:00.946) 0:00:26.876 ********* 2026-03-17 01:07:56.463392 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1078215, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.084923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463397 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1078215, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.084923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463419 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1078215, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.084923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463425 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 
'inode': 1078215, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.084923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463434 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1078225, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0893376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463439 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1078215, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.084923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:07:56.463448 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1078225, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0893376, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463453 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1078215, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.084923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463459 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1078225, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0893376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463473 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1078215, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.084923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463478 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1078225, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0893376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463487 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1078213, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0832338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463492 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1078213, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0832338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463500 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1078213, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0832338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463506 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1078225, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0893376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463511 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1078221, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0878003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463529 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 12980, 'inode': 1078225, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0893376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463539 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1078213, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0832338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463544 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1078225, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0893376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 01:07:56.463550 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1078221, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0878003, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463558 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1078221, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0878003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463563 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1078211, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.082538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463568 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1078213, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0832338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-03-17 01:07:56.463589 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1078213, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0832338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463596 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1078211, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.082538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463599 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1078221, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0878003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463602 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1078221, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0878003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463606 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1078216, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.084923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463612 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1078216, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.084923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463616 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 
'inode': 1078211, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.082538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463628 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1078220, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0874898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463634 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1078220, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0874898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463637 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1078217, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.086088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.463640 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2026-03-17 01:07:56.463644 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules, mode=0644, size=55956)
2026-03-17 01:07:56.463649 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-03-17 01:07:56.463652 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-03-17 01:07:56.463655 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-03-17 01:07:56.463670 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-03-17 01:07:56.463673 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-03-17 01:07:56.463677 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-03-17 01:07:56.463680 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-03-17 01:07:56.463685 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-03-17 01:07:56.463688 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-03-17 01:07:56.463694 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2026-03-17 01:07:56.463706 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463710 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-03-17 01:07:56.463713 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-03-17 01:07:56.463717 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-03-17 01:07:56.463722 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-03-17 01:07:56.463725 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463730 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-03-17 01:07:56.463743 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-03-17 01:07:56.463747 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-03-17 01:07:56.463750 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463753 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-03-17 01:07:56.463758 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-03-17 01:07:56.463762 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-03-17 01:07:56.463767 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463779 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-03-17 01:07:56.463783 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-03-17 01:07:56.463786 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-03-17 01:07:56.463789 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463794 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-03-17 01:07:56.463798 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463804 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-03-17 01:07:56.463816 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463820 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-03-17 01:07:56.463823 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-03-17 01:07:56.463826 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463831 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463837 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-03-17 01:07:56.463840 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463852 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-03-17 01:07:56.463856 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463859 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-03-17 01:07:56.463863 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463868 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-03-17 01:07:56.463876 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463880 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463886 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-03-17 01:07:56.463890 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463894 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-03-17 01:07:56.463897 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-03-17 01:07:56.463903 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-03-17 01:07:56.463910 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-03-17 01:07:56.463914 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-03-17 01:07:56.463921 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-03-17 01:07:56.463925 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-03-17 01:07:56.463928 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-03-17 01:07:56.463932 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-03-17 01:07:56.463939 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-03-17 01:07:56.463943 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-03-17 01:07:56.463947 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-03-17 01:07:56.463956 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules, mode=0644, size=3792)
2026-03-17 01:07:56.463962 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules, mode=0644, size=3539)
2026-03-17 01:07:56.463967 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-03-17 01:07:56.463973 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:07:56.463978 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.463994 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1078218, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.086497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.464000 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1078219, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0871294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.464006 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078212, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0830228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.464015 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078212, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0830228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.464021 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1078218, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.086497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.464027 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1078254, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.1459239, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-17 01:07:56.464031 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:56.464035 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1078210, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0821445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464042 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1078218, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.086497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464046 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1078217, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.086088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464050 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1078210, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0821445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464056 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1078254, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.1459239, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464060 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:07:56.464064 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1078219, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0871294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464068 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1078254, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.1459239, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464074 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:07:56.464077 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1078219, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0871294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464083 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1078218, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.086497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464086 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1078218, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1773706838.086497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464090 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1078214, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0839229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464096 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1078254, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.1459239, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464099 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:07:56.464103 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1078254, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.1459239, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464107 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:07:56.464111 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078224, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.089052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464117 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078209, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0817807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464123 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1078298, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.150924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464127 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1078223, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.088741, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464130 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1078212, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0830228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464136 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1078210, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0821445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464140 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1078219, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0871294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464147 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1078218, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.086497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464150 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1078254, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.1459239, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-17 01:07:56.464154 | orchestrator |
2026-03-17 01:07:56.464158 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-17 01:07:56.464163 | orchestrator | Tuesday 17 March 2026 01:05:41 +0000 (0:00:23.746) 0:00:50.622 *********
2026-03-17 01:07:56.464168 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:07:56.464173 | orchestrator |
2026-03-17 01:07:56.464181 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-17 01:07:56.464187 | orchestrator | Tuesday 17 March 2026 01:05:41 +0000 (0:00:00.645) 0:00:51.268 *********
2026-03-17 01:07:56.464191 | orchestrator | [WARNING]: Skipped
2026-03-17 01:07:56.464197 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464202 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-17 01:07:56.464208 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464213 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-17 01:07:56.464219 | orchestrator | [WARNING]: Skipped
2026-03-17 01:07:56.464224 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464230 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-17 01:07:56.464234 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464239 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-17 01:07:56.464242 | orchestrator | [WARNING]: Skipped
2026-03-17 01:07:56.464246 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464251 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-17 01:07:56.464256 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464261 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-17 01:07:56.464266 | orchestrator | [WARNING]: Skipped
2026-03-17 01:07:56.464279 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464289 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-17 01:07:56.464295 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464300 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-17 01:07:56.464305 | orchestrator | [WARNING]: Skipped
2026-03-17 01:07:56.464310 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464313 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-17 01:07:56.464316 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464328 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-17 01:07:56.464331 | orchestrator | [WARNING]: Skipped
2026-03-17 01:07:56.464334 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464337 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-17 01:07:56.464340 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464344 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-17 01:07:56.464347 | orchestrator | [WARNING]: Skipped
2026-03-17 01:07:56.464350 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464353 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-17 01:07:56.464356 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-17 01:07:56.464359 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-17 01:07:56.464391 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 01:07:56.464395 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:07:56.464398 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-17 01:07:56.464401 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-17 01:07:56.464404 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-17 01:07:56.464407 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-17 01:07:56.464410 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-17 01:07:56.464413 | orchestrator |
2026-03-17 01:07:56.464417 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-17 01:07:56.464420 | orchestrator | Tuesday 17 March 2026 01:05:43 +0000 (0:00:01.614) 0:00:52.882 *********
2026-03-17 01:07:56.464423 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-17 01:07:56.464426 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:07:56.464429 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-17 01:07:56.464432 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:07:56.464435 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-17 01:07:56.464438 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:07:56.464441 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-17 01:07:56.464445 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:07:56.464448 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-17 01:07:56.464451 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:07:56.464454 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-17 01:07:56.464457 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:07:56.464460 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-17 01:07:56.464463 | orchestrator |
2026-03-17 01:07:56.464466 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-17 01:07:56.464469 | orchestrator | Tuesday 17 March 2026 01:05:56 +0000 (0:00:13.406) 0:01:06.289 *********
2026-03-17 01:07:56.464472 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:07:56.464478 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:07:56.464481 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:07:56.464485 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:07:56.464488 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:07:56.464491 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:07:56.464494 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:07:56.464504 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:07:56.464509 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:07:56.464515 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:07:56.464521 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:07:56.464526 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:07:56.464530 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-17 01:07:56.464533 | orchestrator |
2026-03-17 01:07:56.464536 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-17 01:07:56.464540 | orchestrator | Tuesday 17 March 2026 01:05:59 +0000 (0:00:02.765) 0:01:09.055 *********
2026-03-17 01:07:56.464543 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:07:56.464546 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:07:56.464549 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:07:56.464552 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:07:56.464555 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:07:56.464559 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:07:56.464562 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:07:56.464567 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:07:56.464570 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:07:56.464573 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:07:56.464577 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:07:56.464580 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-17 01:07:56.464586 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:07:56.464591 | orchestrator |
2026-03-17 01:07:56.464596 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-17 01:07:56.464601 | orchestrator | Tuesday 17 March 2026 01:06:01 +0000 (0:00:02.365) 0:01:11.421 *********
2026-03-17 01:07:56.464606 | 
orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:07:56.464610 | orchestrator |
2026-03-17 01:07:56.464615 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-17 01:07:56.464620 | orchestrator | Tuesday 17 March 2026 01:06:02 +0000 (0:00:00.910) 0:01:12.331 *********
2026-03-17 01:07:56.464625 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:07:56.464629 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:07:56.464634 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:07:56.464639 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:07:56.464644 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:07:56.464649 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:07:56.464655 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:07:56.464660 | orchestrator |
2026-03-17 01:07:56.464665 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-17 01:07:56.464671 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:00.978) 0:01:13.310 *********
2026-03-17 01:07:56.464676 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:07:56.464681 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:07:56.464686 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:07:56.464691 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:07:56.464696 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:56.464705 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:07:56.464710 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:07:56.464716 | orchestrator |
2026-03-17 01:07:56.464721 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-17 01:07:56.464726 | orchestrator | Tuesday 17 March 2026 01:06:06 +0000 (0:00:02.470) 0:01:15.780 *********
2026-03-17 01:07:56.464732 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:07:56.464737 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:07:56.464742 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:07:56.464748 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:07:56.464752 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:07:56.464755 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:07:56.464758 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:07:56.464761 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:07:56.464766 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:07:56.464769 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:07:56.464772 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:07:56.464776 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:07:56.464779 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-17 01:07:56.464782 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:07:56.464785 | orchestrator |
2026-03-17 01:07:56.464788 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-03-17 01:07:56.464791 | orchestrator | Tuesday 17 March 2026 01:06:07 +0000 (0:00:01.640) 0:01:17.421 *********
2026-03-17 01:07:56.464794 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:07:56.464797 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:07:56.464800 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:07:56.464803 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:07:56.464806 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:07:56.464809 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:07:56.464812 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:07:56.464815 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:07:56.464819 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:07:56.464822 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:07:56.464825 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:07:56.464828 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:07:56.464831 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-17 01:07:56.464834 | orchestrator |
2026-03-17 01:07:56.464837 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-03-17 01:07:56.464843 | orchestrator | Tuesday 17 March 2026 01:06:09 +0000 (0:00:01.775) 0:01:19.196 *********
2026-03-17 01:07:56.464846 | orchestrator | [WARNING]: Skipped
2026-03-17 01:07:56.464849 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-03-17 01:07:56.464853 | orchestrator | due to this access issue:
2026-03-17 01:07:56.464856 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-03-17 01:07:56.464859 | orchestrator | not a directory
2026-03-17 01:07:56.464864 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:07:56.464868 | orchestrator |
2026-03-17 01:07:56.464871 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-03-17 01:07:56.464874 | orchestrator | Tuesday 17 March 2026 01:06:10 +0000 (0:00:00.985) 0:01:20.182 *********
2026-03-17 01:07:56.464877 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:07:56.464880 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:07:56.464883 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:07:56.464886 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:07:56.464889 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:07:56.464892 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:07:56.464895 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:07:56.464898 | orchestrator |
2026-03-17 01:07:56.464901 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-03-17 01:07:56.464904 | orchestrator | Tuesday 17 March 2026 01:06:11 +0000 (0:00:00.781) 0:01:20.963 *********
2026-03-17 01:07:56.464907 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:07:56.464910 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:07:56.464913 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:07:56.464917 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:07:56.464920 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:07:56.464923 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:07:56.464926 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:07:56.464929 | orchestrator |
2026-03-17 01:07:56.464932 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-03-17 01:07:56.464935 | orchestrator | Tuesday 17 March 2026 01:06:12 +0000 (0:00:00.716) 0:01:21.680 *********
2026-03-17 01:07:56.464939 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-17 01:07:56.464945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-17 01:07:56.464948 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-17 01:07:56.464952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-17 01:07:56.464960 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-17 01:07:56.464963 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-17 01:07:56.464967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-17 01:07:56.464970 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-17 01:07:56.464973 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-17 01:07:56.464979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 01:07:56.464982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 01:07:56.464986 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-17 01:07:56.464994 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-17 01:07:56.464997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 01:07:56.465001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-17 01:07:56.465005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-17 01:07:56.465009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 01:07:56.465012 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 01:07:56.465018 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-17 01:07:56.465024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 01:07:56.465027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 01:07:56.465030 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-17 01:07:56.465033 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-17 01:07:56.465039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-17 01:07:56.465042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-17 01:07:56.465047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-17 01:07:56.465053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 01:07:56.465056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 01:07:56.465059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 01:07:56.465062 | orchestrator |
2026-03-17 01:07:56.465066 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-03-17 01:07:56.465069 | orchestrator | Tuesday 17 March 2026 01:06:16 +0000 (0:00:04.306) 0:01:25.986 *********
2026-03-17 01:07:56.465072 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-17 01:07:56.465075 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:07:56.465078 | orchestrator |
2026-03-17 01:07:56.465081 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-17 01:07:56.465084 | orchestrator | Tuesday 17 March 2026 01:06:18 +0000 (0:00:01.818) 0:01:27.804 *********
2026-03-17 01:07:56.465088 | orchestrator |
2026-03-17 01:07:56.465091 | 
orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-17 01:07:56.465094 | orchestrator | Tuesday 17 March 2026 01:06:18 +0000 (0:00:00.066) 0:01:27.871 *********
2026-03-17 01:07:56.465097 | orchestrator |
2026-03-17 01:07:56.465100 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-17 01:07:56.465103 | orchestrator | Tuesday 17 March 2026 01:06:18 +0000 (0:00:00.064) 0:01:27.935 *********
2026-03-17 01:07:56.465106 | orchestrator |
2026-03-17 01:07:56.465109 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-17 01:07:56.465112 | orchestrator | Tuesday 17 March 2026 01:06:18 +0000 (0:00:00.075) 0:01:28.011 *********
2026-03-17 01:07:56.465115 | orchestrator |
2026-03-17 01:07:56.465118 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-17 01:07:56.465121 | orchestrator | Tuesday 17 March 2026 01:06:18 +0000 (0:00:00.171) 0:01:28.182 *********
2026-03-17 01:07:56.465124 | orchestrator |
2026-03-17 01:07:56.465131 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-17 01:07:56.465134 | orchestrator | Tuesday 17 March 2026 01:06:18 +0000 (0:00:00.059) 0:01:28.241 *********
2026-03-17 01:07:56.465137 | orchestrator |
2026-03-17 01:07:56.465142 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-17 01:07:56.465146 | orchestrator | Tuesday 17 March 2026 01:06:18 +0000 (0:00:00.061) 0:01:28.303 *********
2026-03-17 01:07:56.465149 | orchestrator |
2026-03-17 01:07:56.465152 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-03-17 01:07:56.465155 | orchestrator | Tuesday 17 March 2026 01:06:18 +0000 (0:00:00.081) 0:01:28.385 *********
2026-03-17 01:07:56.465158 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:56.465162 | orchestrator |
2026-03-17 01:07:56.465165 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-03-17 01:07:56.465168 | orchestrator | Tuesday 17 March 2026 01:06:37 +0000 (0:00:19.013) 0:01:47.399 *********
2026-03-17 01:07:56.465171 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:56.465174 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:07:56.465177 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:07:56.465180 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:07:56.465183 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:56.465186 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:07:56.465189 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:07:56.465192 | orchestrator |
2026-03-17 01:07:56.465195 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-03-17 01:07:56.465198 | orchestrator | Tuesday 17 March 2026 01:06:52 +0000 (0:00:14.453) 0:02:01.852 *********
2026-03-17 01:07:56.465202 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:07:56.465205 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:07:56.465208 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:56.465211 | orchestrator |
2026-03-17 01:07:56.465214 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-03-17 01:07:56.465217 | orchestrator | Tuesday 17 March 2026 01:06:57 +0000 (0:00:05.341) 0:02:07.193 *********
2026-03-17 01:07:56.465220 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:56.465223 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:07:56.465226 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:07:56.465229 | orchestrator |
2026-03-17 01:07:56.465232 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-03-17 01:07:56.465236 | orchestrator | Tuesday 17 March 2026 01:07:08 +0000 (0:00:10.787) 0:02:17.981 *********
2026-03-17 01:07:56.465239 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:56.465242 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:56.465245 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:07:56.465248 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:07:56.465251 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:07:56.465254 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:07:56.465259 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:07:56.465262 | orchestrator |
2026-03-17 01:07:56.465266 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-03-17 01:07:56.465269 | orchestrator | Tuesday 17 March 2026 01:07:22 +0000 (0:00:13.935) 0:02:31.917 *********
2026-03-17 01:07:56.465272 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:56.465275 | orchestrator |
2026-03-17 01:07:56.465278 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-03-17 01:07:56.465281 | orchestrator | Tuesday 17 March 2026 01:07:29 +0000 (0:00:06.930) 0:02:38.847 *********
2026-03-17 01:07:56.465285 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:07:56.465288 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:56.465291 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:07:56.465294 | orchestrator |
2026-03-17 01:07:56.465297 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-03-17 01:07:56.465300 | orchestrator | Tuesday 17 March 2026 01:07:38 +0000 (0:00:09.260) 0:02:48.107 *********
2026-03-17 01:07:56.465305 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:56.465308 | orchestrator |
2026-03-17 01:07:56.465312 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-03-17 01:07:56.465315 | orchestrator | Tuesday 17 March 2026 01:07:42 +0000 (0:00:04.125) 0:02:52.233 *********
2026-03-17 01:07:56.465318 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:07:56.465321 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:07:56.465324 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:07:56.465327 | orchestrator |
2026-03-17 01:07:56.465330 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:07:56.465334 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-17 01:07:56.465337 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-17 01:07:56.465340 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-17 01:07:56.465344 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-17 01:07:56.465347 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-17 01:07:56.465350 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-17 01:07:56.465353 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-17 01:07:56.465356 | orchestrator |
2026-03-17 01:07:56.465359 | orchestrator |
2026-03-17 01:07:56.465372 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:07:56.465375 | orchestrator | Tuesday 17 March 2026 01:07:53 +0000 (0:00:10.530) 0:03:02.763 *********
2026-03-17 01:07:56.465380 | orchestrator | ===============================================================================
2026-03-17 01:07:56.465383 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.75s
2026-03-17 01:07:56.465386 | orchestrator | 
prometheus : Restart prometheus-server container ----------------------- 19.01s
2026-03-17 01:07:56.465389 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.45s
2026-03-17 01:07:56.465392 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.94s
2026-03-17 01:07:56.465395 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.41s
2026-03-17 01:07:56.465398 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.79s
2026-03-17 01:07:56.465401 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.53s
2026-03-17 01:07:56.465404 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.26s
2026-03-17 01:07:56.465407 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.93s
2026-03-17 01:07:56.465410 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.77s
2026-03-17 01:07:56.465413 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.98s
2026-03-17 01:07:56.465416 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.34s
2026-03-17 01:07:56.465419 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.31s
2026-03-17 01:07:56.465422 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.13s
2026-03-17 01:07:56.465426 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.99s
2026-03-17 01:07:56.465429 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.91s
2026-03-17 01:07:56.465435 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.77s
2026-03-17 01:07:56.465438 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.69s
2026-03-17 01:07:56.465441 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.47s
2026-03-17 01:07:56.465444 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.37s
2026-03-17 01:07:56.465449 | orchestrator | 2026-03-17 01:07:56 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:07:59.483766 | orchestrator | 2026-03-17 01:07:59 | INFO  | Task ddb42b81-f813-4702-82f8-d8627a783361 is in state STARTED
2026-03-17 01:07:59.484742 | orchestrator | 2026-03-17 01:07:59 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:07:59.486428 | orchestrator | 2026-03-17 01:07:59 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED
2026-03-17 01:07:59.487793 | orchestrator | 2026-03-17 01:07:59 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED
2026-03-17 01:07:59.487916 | orchestrator | 2026-03-17 01:07:59 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:08:02.520723 | orchestrator | 2026-03-17 01:08:02 | INFO  | Task ddb42b81-f813-4702-82f8-d8627a783361 is in state STARTED
2026-03-17 01:08:02.522311 | orchestrator | 2026-03-17 01:08:02 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:08:02.523952 | orchestrator | 2026-03-17 01:08:02 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED
2026-03-17 01:08:02.525335 | orchestrator | 2026-03-17 01:08:02 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED
2026-03-17 01:08:02.525467 | orchestrator | 2026-03-17 01:08:02 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:08:05.560334 | orchestrator | 2026-03-17 01:08:05 | INFO  | Task ddb42b81-f813-4702-82f8-d8627a783361 is in state STARTED
2026-03-17 01:08:05.562113 | orchestrator | 2026-03-17 01:08:05 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:08:05.563419 | orchestrator | 2026-03-17 01:08:05 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED
2026-03-17 01:08:05.565054 | orchestrator | 2026-03-17 01:08:05 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED
2026-03-17 01:08:05.565287 | orchestrator | 2026-03-17 01:08:05 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:08:08.605946 | orchestrator | 2026-03-17 01:08:08 | INFO  | Task ddb42b81-f813-4702-82f8-d8627a783361 is in state STARTED
2026-03-17 01:08:08.607837 | orchestrator | 2026-03-17 01:08:08 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:08:08.609556 | orchestrator | 2026-03-17 01:08:08 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED
2026-03-17 01:08:08.611434 | orchestrator | 2026-03-17 01:08:08 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED
2026-03-17 01:08:08.611620 | orchestrator | 2026-03-17 01:08:08 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:08:11.657557 | orchestrator |
2026-03-17 01:08:11.657624 | orchestrator |
2026-03-17 01:08:11.657645 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:08:11.657654 | orchestrator |
2026-03-17 01:08:11.657662 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:08:11.657688 | orchestrator | Tuesday 17 March 2026 01:05:38 +0000 (0:00:00.199) 0:00:00.199 *********
2026-03-17 01:08:11.657696 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:08:11.657704 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:08:11.657712 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:08:11.657735 | orchestrator |
2026-03-17 01:08:11.657745 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:08:11.657757 | orchestrator | Tuesday 17 March 2026 01:05:38 +0000 (0:00:00.247) 0:00:00.447 *********
2026-03-17 01:08:11.657776 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-03-17 01:08:11.657790 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-03-17 01:08:11.657949 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-03-17 01:08:11.657962 | orchestrator |
2026-03-17 01:08:11.657970 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-03-17 01:08:11.657977 | orchestrator |
2026-03-17 01:08:11.657984 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-17 01:08:11.657992 | orchestrator | Tuesday 17 March 2026 01:05:38 +0000 (0:00:00.332) 0:00:00.780 *********
2026-03-17 01:08:11.657999 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:08:11.658007 | orchestrator |
2026-03-17 01:08:11.658051 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-03-17 01:08:11.658060 | orchestrator | Tuesday 17 March 2026 01:05:39 +0000 (0:00:00.490) 0:00:01.270 *********
2026-03-17 01:08:11.658068 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-03-17 01:08:11.658075 | orchestrator |
2026-03-17 01:08:11.658082 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-03-17 01:08:11.658090 | orchestrator | Tuesday 17 March 2026 01:05:42 +0000 (0:00:03.591) 0:00:04.862 *********
2026-03-17 01:08:11.658117 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-03-17 01:08:11.658138 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-03-17 01:08:11.658146 | orchestrator |
2026-03-17 01:08:11.658154 | orchestrator | 
TASK [service-ks-register : glance | Creating projects] ************************
2026-03-17 01:08:11.658162 | orchestrator | Tuesday 17 March 2026 01:05:48 +0000 (0:00:05.918) 0:00:10.780 *********
2026-03-17 01:08:11.658169 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:08:11.658177 | orchestrator |
2026-03-17 01:08:11.658184 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-03-17 01:08:11.658191 | orchestrator | Tuesday 17 March 2026 01:05:51 +0000 (0:00:02.949) 0:00:13.730 *********
2026-03-17 01:08:11.658199 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:08:11.658206 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-03-17 01:08:11.658214 | orchestrator |
2026-03-17 01:08:11.658221 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-03-17 01:08:11.658228 | orchestrator | Tuesday 17 March 2026 01:05:55 +0000 (0:00:03.334) 0:00:17.064 *********
2026-03-17 01:08:11.658236 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:08:11.658243 | orchestrator |
2026-03-17 01:08:11.658250 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-03-17 01:08:11.658257 | orchestrator | Tuesday 17 March 2026 01:05:58 +0000 (0:00:03.114) 0:00:20.179 *********
2026-03-17 01:08:11.658265 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-03-17 01:08:11.658272 | orchestrator |
2026-03-17 01:08:11.658279 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-03-17 01:08:11.658286 | orchestrator | Tuesday 17 March 2026 01:06:01 +0000 (0:00:03.711) 0:00:23.891 *********
2026-03-17 01:08:11.658317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups':
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:08:11.658388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:08:11.658401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-17 01:08:11.658414 | orchestrator |
2026-03-17 01:08:11.658422 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-17 01:08:11.658430 | orchestrator | Tuesday 17 March 2026 01:06:06 +0000 (0:00:04.023) 0:00:27.914 *********
2026-03-17 01:08:11.658437 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:08:11.658445 | orchestrator |
2026-03-17 01:08:11.658459 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-03-17 01:08:11.658470 | orchestrator | Tuesday 17 March 2026 01:06:06 +0000 (0:00:00.571) 0:00:28.486 *********
2026-03-17 01:08:11.658477 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:08:11.658485 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:08:11.658492 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:08:11.658499 | orchestrator |
2026-03-17 01:08:11.658507 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-03-17 01:08:11.658514 | orchestrator | Tuesday 17 March 2026 01:06:10 +0000 (0:00:04.114) 0:00:32.600 *********
2026-03-17 01:08:11.658523 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:08:11.658536 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:08:11.658548 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:08:11.658559 | orchestrator |
2026-03-17 01:08:11.658571 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-03-17 01:08:11.658584 | orchestrator | Tuesday 17 March 2026 01:06:12 +0000 (0:00:01.360) 0:00:33.960 *********
2026-03-17 01:08:11.658598 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:08:11.658607 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:08:11.658614 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:08:11.658621 | orchestrator |
2026-03-17 01:08:11.658629 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-03-17 01:08:11.658636 | orchestrator | Tuesday 17 March 2026 01:06:13 +0000 (0:00:01.256) 0:00:35.217 *********
2026-03-17 01:08:11.658643 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:08:11.658650 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:08:11.658657 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:08:11.658664 | orchestrator |
2026-03-17 01:08:11.658672 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-03-17 01:08:11.658679 | orchestrator | Tuesday 17 March 2026 01:06:14 +0000 (0:00:00.874) 0:00:36.091 *********
2026-03-17 01:08:11.658686 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:11.658693 | orchestrator |
2026-03-17 01:08:11.658700 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-03-17 01:08:11.658708 | orchestrator | Tuesday 17 March 2026 01:06:14 +0000 (0:00:00.118) 0:00:36.210 *********
2026-03-17 01:08:11.658715 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:11.658722 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:08:11.658729 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:11.658736 | orchestrator |
2026-03-17 01:08:11.658744 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-17 01:08:11.658756 | orchestrator | Tuesday 17 March 2026 01:06:14 +0000 (0:00:00.266) 0:00:36.477 *********
2026-03-17 01:08:11.658764 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:08:11.658771 | orchestrator |
2026-03-17 01:08:11.658778 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-03-17 01:08:11.658785 | orchestrator | Tuesday 17 March 2026 01:06:15 +0000 (0:00:00.499) 0:00:36.976 *********
2026-03-17 01:08:11.658799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:08:11.658812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:08:11.658821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-17 01:08:11.658856 | orchestrator |
2026-03-17 01:08:11.658864 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-03-17 01:08:11.658872 | orchestrator | Tuesday 17 March 2026 01:06:19 +0000 (0:00:04.027) 0:00:41.004 *********
2026-03-17 01:08:11.658886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-17 01:08:11.658895 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:11.658929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5',
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:08:11.658942 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:11.658959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-17 01:08:11.658967 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:08:11.658975 | orchestrator |
2026-03-17 01:08:11.658982 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2026-03-17 01:08:11.658989 | orchestrator | Tuesday 17 March 2026 01:06:21 +0000 (0:00:02.800) 0:00:43.804 *********
2026-03-17 01:08:11.658997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']},
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:08:11.659009 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:11.659024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-17 01:08:11.659032 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:11.659040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-17 01:08:11.659052 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:11.659061 | orchestrator |
2026-03-17 01:08:11.659074 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-03-17 01:08:11.659084 | orchestrator | Tuesday 17 March 2026 01:06:25 +0000 (0:00:03.445) 0:00:47.250 *********
2026-03-17 01:08:11.659097 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:08:11.659116 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:11.659126 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:11.659137 | orchestrator |
2026-03-17 01:08:11.659150 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-03-17 01:08:11.659161 | orchestrator | Tuesday 17 March 2026 01:06:28 +0000 (0:00:03.368) 0:00:50.618 *********
2026-03-17 01:08:11.659174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:08:11.659203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:08:11.659224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:08:11.659233 | orchestrator | 2026-03-17 01:08:11.659241 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-17 01:08:11.659248 | orchestrator | Tuesday 17 March 2026 01:06:32 +0000 (0:00:03.946) 0:00:54.564 ********* 2026-03-17 01:08:11.659255 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:11.659263 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:11.659270 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:11.659277 | orchestrator | 2026-03-17 01:08:11.659285 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-17 01:08:11.659295 | orchestrator | Tuesday 17 March 2026 01:06:38 +0000 (0:00:05.343) 0:00:59.908 ********* 2026-03-17 01:08:11.659310 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:11.659328 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:11.659355 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:11.659369 | orchestrator | 2026-03-17 01:08:11.659381 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-17 01:08:11.659393 | orchestrator | Tuesday 17 March 2026 01:06:44 +0000 (0:00:06.503) 0:01:06.412 ********* 2026-03-17 01:08:11.659404 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:11.659430 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:11 | INFO  | Task ddb42b81-f813-4702-82f8-d8627a783361 is in state SUCCESS 2026-03-17 01:08:11.659444 | orchestrator | 2026-03-17 01:08:11 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:08:11.659456 | orchestrator | 2026-03-17 01:08:11 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED 2026-03-17 01:08:11.659480 | orchestrator
| 2026-03-17 01:08:11 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:08:11.659493 | orchestrator | 2026-03-17 01:08:11 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:11.659506 | orchestrator | 2026-03-17 01:08:11.659518 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:11.659530 | orchestrator | 2026-03-17 01:08:11.659542 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-17 01:08:11.659555 | orchestrator | Tuesday 17 March 2026 01:06:47 +0000 (0:00:03.321) 0:01:09.733 ********* 2026-03-17 01:08:11.659567 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:11.659579 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:11.659592 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:11.659603 | orchestrator | 2026-03-17 01:08:11.659615 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-17 01:08:11.659627 | orchestrator | Tuesday 17 March 2026 01:06:50 +0000 (0:00:02.765) 0:01:12.499 ********* 2026-03-17 01:08:11.659640 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:11.659653 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:11.659664 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:11.659676 | orchestrator | 2026-03-17 01:08:11.659688 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-17 01:08:11.659701 | orchestrator | Tuesday 17 March 2026 01:06:54 +0000 (0:00:04.218) 0:01:16.718 ********* 2026-03-17 01:08:11.659711 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:11.659718 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:11.659725 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:11.659732 | orchestrator | 2026-03-17 01:08:11.659740 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 
2026-03-17 01:08:11.659747 | orchestrator | Tuesday 17 March 2026 01:06:55 +0000 (0:00:00.260) 0:01:16.979 ********* 2026-03-17 01:08:11.659754 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-17 01:08:11.659764 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:11.659776 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-17 01:08:11.659794 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:11.659807 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-17 01:08:11.659818 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:11.659830 | orchestrator | 2026-03-17 01:08:11.659841 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-17 01:08:11.659851 | orchestrator | Tuesday 17 March 2026 01:06:58 +0000 (0:00:03.293) 0:01:20.272 ********* 2026-03-17 01:08:11.659863 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:11.659876 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:11.659887 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:11.659900 | orchestrator | 2026-03-17 01:08:11.659909 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-17 01:08:11.659916 | orchestrator | Tuesday 17 March 2026 01:07:02 +0000 (0:00:04.613) 0:01:24.886 ********* 2026-03-17 01:08:11.659940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:08:11.659957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:08:11.659965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:08:11.659977 | orchestrator | 2026-03-17 01:08:11.659985 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-17 01:08:11.659992 | orchestrator | Tuesday 17 March 2026 01:07:06 +0000 (0:00:03.177) 0:01:28.064 ********* 2026-03-17 01:08:11.659999 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:11.660007 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:11.660014 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:11.660021 | orchestrator | 2026-03-17 01:08:11.660028 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-17 01:08:11.660035 | orchestrator | Tuesday 17 March 2026 01:07:06 +0000 (0:00:00.257) 0:01:28.321 ********* 2026-03-17 01:08:11.660043 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:11.660050 | orchestrator | 2026-03-17 01:08:11.660057 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-17 01:08:11.660068 | orchestrator | Tuesday 17 March 2026 01:07:08 +0000 (0:00:01.926) 0:01:30.248 ********* 2026-03-17 01:08:11.660079 | orchestrator | changed: [testbed-node-0] 
2026-03-17 01:08:11.660086 | orchestrator | 2026-03-17 01:08:11.660094 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-17 01:08:11.660101 | orchestrator | Tuesday 17 March 2026 01:07:10 +0000 (0:00:02.218) 0:01:32.467 ********* 2026-03-17 01:08:11.660108 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:11.660115 | orchestrator | 2026-03-17 01:08:11.660123 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-17 01:08:11.660130 | orchestrator | Tuesday 17 March 2026 01:07:13 +0000 (0:00:02.450) 0:01:34.917 ********* 2026-03-17 01:08:11.660137 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:11.660144 | orchestrator | 2026-03-17 01:08:11.660151 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-17 01:08:11.660158 | orchestrator | Tuesday 17 March 2026 01:07:41 +0000 (0:00:28.032) 0:02:02.950 ********* 2026-03-17 01:08:11.660166 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:11.660173 | orchestrator | 2026-03-17 01:08:11.660180 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-17 01:08:11.660187 | orchestrator | Tuesday 17 March 2026 01:07:43 +0000 (0:00:02.019) 0:02:04.970 ********* 2026-03-17 01:08:11.660194 | orchestrator | 2026-03-17 01:08:11.660202 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-17 01:08:11.660209 | orchestrator | Tuesday 17 March 2026 01:07:43 +0000 (0:00:00.342) 0:02:05.313 ********* 2026-03-17 01:08:11.660216 | orchestrator | 2026-03-17 01:08:11.660223 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-17 01:08:11.660230 | orchestrator | Tuesday 17 March 2026 01:07:43 +0000 (0:00:00.094) 0:02:05.407 ********* 2026-03-17 01:08:11.660237 | orchestrator | 2026-03-17 01:08:11.660244 | 
orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-17 01:08:11.660252 | orchestrator | Tuesday 17 March 2026 01:07:43 +0000 (0:00:00.088) 0:02:05.496 ********* 2026-03-17 01:08:11.660259 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:11.660266 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:11.660273 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:11.660280 | orchestrator | 2026-03-17 01:08:11.660288 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:08:11.660296 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-17 01:08:11.660303 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-17 01:08:11.660311 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-17 01:08:11.660322 | orchestrator | 2026-03-17 01:08:11.660330 | orchestrator | 2026-03-17 01:08:11.660337 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:08:11.660375 | orchestrator | Tuesday 17 March 2026 01:08:10 +0000 (0:00:26.888) 0:02:32.384 ********* 2026-03-17 01:08:11.660387 | orchestrator | =============================================================================== 2026-03-17 01:08:11.660400 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.03s 2026-03-17 01:08:11.660408 | orchestrator | glance : Restart glance-api container ---------------------------------- 26.89s 2026-03-17 01:08:11.660415 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.50s 2026-03-17 01:08:11.660423 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.92s 2026-03-17 01:08:11.660430 | orchestrator | glance : 
Copying over glance-api.conf ----------------------------------- 5.34s 2026-03-17 01:08:11.660437 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.61s 2026-03-17 01:08:11.660445 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.22s 2026-03-17 01:08:11.660452 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.11s 2026-03-17 01:08:11.660459 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.03s 2026-03-17 01:08:11.660466 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.02s 2026-03-17 01:08:11.660473 | orchestrator | glance : Copying over config.json files for services -------------------- 3.95s 2026-03-17 01:08:11.660481 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.71s 2026-03-17 01:08:11.660488 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.59s 2026-03-17 01:08:11.660495 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.45s 2026-03-17 01:08:11.660502 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.37s 2026-03-17 01:08:11.660509 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.33s 2026-03-17 01:08:11.660517 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.32s 2026-03-17 01:08:11.660524 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.29s 2026-03-17 01:08:11.660531 | orchestrator | glance : Check glance containers ---------------------------------------- 3.18s 2026-03-17 01:08:11.660539 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.11s 2026-03-17 01:08:14.684014 | orchestrator | 2026-03-17 01:08:14 | 
INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED 2026-03-17 01:08:14.684726 | orchestrator | 2026-03-17 01:08:14 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:08:14.685518 | orchestrator | 2026-03-17 01:08:14 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED 2026-03-17 01:08:14.686263 | orchestrator | 2026-03-17 01:08:14 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:08:14.686410 | orchestrator | 2026-03-17 01:08:14 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:17.730095 | orchestrator | 2026-03-17 01:08:17 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED 2026-03-17 01:08:17.734120 | orchestrator | 2026-03-17 01:08:17 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:08:17.735295 | orchestrator | 2026-03-17 01:08:17 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED 2026-03-17 01:08:17.738177 | orchestrator | 2026-03-17 01:08:17 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:08:17.738220 | orchestrator | 2026-03-17 01:08:17 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:20.765820 | orchestrator | 2026-03-17 01:08:20 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED 2026-03-17 01:08:20.766622 | orchestrator | 2026-03-17 01:08:20 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:08:20.767615 | orchestrator | 2026-03-17 01:08:20 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED 2026-03-17 01:08:20.768750 | orchestrator | 2026-03-17 01:08:20 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:08:20.768779 | orchestrator | 2026-03-17 01:08:20 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:23.808583 | orchestrator | 2026-03-17 01:08:23 | INFO  | Task 
ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED 2026-03-17 01:08:23.810795 | orchestrator | 2026-03-17 01:08:23 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:08:23.812521 | orchestrator | 2026-03-17 01:08:23 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED 2026-03-17 01:08:23.813819 | orchestrator | 2026-03-17 01:08:23 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:08:23.813873 | orchestrator | 2026-03-17 01:08:23 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:26.850048 | orchestrator | 2026-03-17 01:08:26 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED 2026-03-17 01:08:26.852952 | orchestrator | 2026-03-17 01:08:26 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:08:26.854401 | orchestrator | 2026-03-17 01:08:26 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED 2026-03-17 01:08:26.856850 | orchestrator | 2026-03-17 01:08:26 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:08:26.856902 | orchestrator | 2026-03-17 01:08:26 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:29.897283 | orchestrator | 2026-03-17 01:08:29 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED 2026-03-17 01:08:29.901261 | orchestrator | 2026-03-17 01:08:29 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:08:29.903204 | orchestrator | 2026-03-17 01:08:29 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED 2026-03-17 01:08:29.905291 | orchestrator | 2026-03-17 01:08:29 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:08:29.905412 | orchestrator | 2026-03-17 01:08:29 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:32.947725 | orchestrator | 2026-03-17 01:08:32 | INFO  | Task 
ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED 2026-03-17 01:08:32.949281 | orchestrator | 2026-03-17 01:08:32 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:08:32.951701 | orchestrator | 2026-03-17 01:08:32 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED 2026-03-17 01:08:32.953394 | orchestrator | 2026-03-17 01:08:32 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:08:32.953451 | orchestrator | 2026-03-17 01:08:32 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:35.992760 | orchestrator | 2026-03-17 01:08:35 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED 2026-03-17 01:08:35.993741 | orchestrator | 2026-03-17 01:08:35 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:08:35.995106 | orchestrator | 2026-03-17 01:08:35 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED 2026-03-17 01:08:35.996802 | orchestrator | 2026-03-17 01:08:35 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:08:35.996852 | orchestrator | 2026-03-17 01:08:35 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:39.041685 | orchestrator | 2026-03-17 01:08:39 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED 2026-03-17 01:08:39.042273 | orchestrator | 2026-03-17 01:08:39 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:08:39.043774 | orchestrator | 2026-03-17 01:08:39 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED 2026-03-17 01:08:39.046127 | orchestrator | 2026-03-17 01:08:39 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:08:39.046169 | orchestrator | 2026-03-17 01:08:39 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:42.086938 | orchestrator | 2026-03-17 01:08:42 | INFO  | Task 
ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED 2026-03-17 01:08:42.087160 | orchestrator | 2026-03-17 01:08:42 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:08:42.088186 | orchestrator | 2026-03-17 01:08:42 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED 2026-03-17 01:08:42.088887 | orchestrator | 2026-03-17 01:08:42 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:08:42.089278 | orchestrator | 2026-03-17 01:08:42 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:45.127187 | orchestrator | 2026-03-17 01:08:45 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED 2026-03-17 01:08:45.128675 | orchestrator | 2026-03-17 01:08:45 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:08:45.129716 | orchestrator | 2026-03-17 01:08:45 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED 2026-03-17 01:08:45.132451 | orchestrator | 2026-03-17 01:08:45 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state STARTED 2026-03-17 01:08:45.132507 | orchestrator | 2026-03-17 01:08:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:48.177225 | orchestrator | 2026-03-17 01:08:48 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED 2026-03-17 01:08:48.178702 | orchestrator | 2026-03-17 01:08:48 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:08:48.180846 | orchestrator | 2026-03-17 01:08:48 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED 2026-03-17 01:08:48.182661 | orchestrator | 2026-03-17 01:08:48 | INFO  | Task 28ba1124-53a0-4f9c-bfda-7d6a3bd76ed1 is in state SUCCESS 2026-03-17 01:08:48.184368 | orchestrator | 2026-03-17 01:08:48.184410 | orchestrator | 2026-03-17 01:08:48.184419 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2026-03-17 01:08:48.184425 | orchestrator |
2026-03-17 01:08:48.184431 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:08:48.184437 | orchestrator | Tuesday 17 March 2026 01:06:15 +0000 (0:00:00.238) 0:00:00.238 *********
2026-03-17 01:08:48.184443 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:08:48.184450 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:08:48.184456 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:08:48.184460 | orchestrator |
2026-03-17 01:08:48.184464 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:08:48.184467 | orchestrator | Tuesday 17 March 2026 01:06:16 +0000 (0:00:00.449) 0:00:00.688 *********
2026-03-17 01:08:48.184471 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-03-17 01:08:48.184475 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-03-17 01:08:48.184491 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-03-17 01:08:48.184495 | orchestrator |
2026-03-17 01:08:48.184501 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-03-17 01:08:48.184508 | orchestrator |
2026-03-17 01:08:48.184513 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-17 01:08:48.184519 | orchestrator | Tuesday 17 March 2026 01:06:16 +0000 (0:00:00.692) 0:00:01.380 *********
2026-03-17 01:08:48.184526 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:08:48.184531 | orchestrator |
2026-03-17 01:08:48.184534 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-03-17 01:08:48.184538 | orchestrator | Tuesday 17 March 2026 01:06:17 +0000 (0:00:00.827) 0:00:02.207 *********
2026-03-17 01:08:48.184542 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-03-17 01:08:48.184545 | orchestrator |
2026-03-17 01:08:48.184549 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-03-17 01:08:48.184552 | orchestrator | Tuesday 17 March 2026 01:06:21 +0000 (0:00:03.513) 0:00:05.720 *********
2026-03-17 01:08:48.184556 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-03-17 01:08:48.184568 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-03-17 01:08:48.184572 | orchestrator |
2026-03-17 01:08:48.184575 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-03-17 01:08:48.184579 | orchestrator | Tuesday 17 March 2026 01:06:27 +0000 (0:00:06.410) 0:00:12.131 *********
2026-03-17 01:08:48.184582 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:08:48.184586 | orchestrator |
2026-03-17 01:08:48.184590 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-03-17 01:08:48.184593 | orchestrator | Tuesday 17 March 2026 01:06:30 +0000 (0:00:03.320) 0:00:15.452 *********
2026-03-17 01:08:48.184597 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:08:48.184600 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-03-17 01:08:48.184604 | orchestrator |
2026-03-17 01:08:48.184607 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-03-17 01:08:48.184611 | orchestrator | Tuesday 17 March 2026 01:06:34 +0000 (0:00:04.060) 0:00:19.512 *********
2026-03-17 01:08:48.184614 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:08:48.184618 | orchestrator |
2026-03-17 01:08:48.184621 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-03-17 01:08:48.184625 | orchestrator | Tuesday 17 March 2026 01:06:38 +0000 (0:00:03.526) 0:00:23.039 *********
2026-03-17 01:08:48.184628 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-03-17 01:08:48.184664 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-03-17 01:08:48.184669 | orchestrator |
2026-03-17 01:08:48.184673 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-03-17 01:08:48.184679 | orchestrator | Tuesday 17 March 2026 01:06:45 +0000 (0:00:07.076) 0:00:30.115 *********
2026-03-17 01:08:48.184690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-17 01:08:48.184781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-17 01:08:48.184789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-17 01:08:48.184914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.184920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.184923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.184927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.184938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.184942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.184948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.184952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.184956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.184962 | orchestrator |
2026-03-17 01:08:48.184966 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-17 01:08:48.184970 | orchestrator | Tuesday 17 March 2026 01:06:47 +0000 (0:00:02.241) 0:00:32.357 *********
2026-03-17 01:08:48.184973 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:48.184977 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:08:48.184980 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:48.184984 | orchestrator |
2026-03-17 01:08:48.184988 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-17 01:08:48.184991 | orchestrator | Tuesday 17 March 2026 01:06:48 +0000 (0:00:00.237) 0:00:32.594 *********
2026-03-17 01:08:48.184995 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:08:48.184999 | orchestrator |
2026-03-17 01:08:48.185008 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-03-17 01:08:48.185014 | orchestrator | Tuesday 17 March 2026 01:06:48 +0000 (0:00:00.494) 0:00:33.088 *********
2026-03-17 01:08:48.185020 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-03-17 01:08:48.185025 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-03-17 01:08:48.185031 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-03-17 01:08:48.185037 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-03-17 01:08:48.185042 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-03-17 01:08:48.185048 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-03-17 01:08:48.185054 | orchestrator |
2026-03-17 01:08:48.185060 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-03-17 01:08:48.185065 | orchestrator | Tuesday 17 March 2026 01:06:50 +0000 (0:00:01.617) 0:00:34.706 *********
2026-03-17 01:08:48.185072 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-17 01:08:48.185082 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-17 01:08:48.185089 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-17 01:08:48.185101 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-17 01:08:48.185113 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-17 01:08:48.185120 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-17 01:08:48.185128 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-17 01:08:48.185133 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-17 01:08:48.185139 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-17 01:08:48.185147 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-17 01:08:48.185151 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-17 01:08:48.185157 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-17 01:08:48.185160 | orchestrator |
2026-03-17 01:08:48.185164 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-03-17 01:08:48.185168 | orchestrator | Tuesday 17 March 2026 01:06:53 +0000 (0:00:03.038) 0:00:37.744 *********
2026-03-17 01:08:48.185171 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:08:48.185177 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:08:48.185181 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-17 01:08:48.185184 | orchestrator |
2026-03-17 01:08:48.185188 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-03-17 01:08:48.185192 | orchestrator | Tuesday 17 March 2026 01:06:55 +0000 (0:00:02.267) 0:00:40.011 *********
2026-03-17 01:08:48.185195 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-03-17 01:08:48.185198 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-03-17 01:08:48.185202 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-03-17 01:08:48.185205 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-03-17 01:08:48.185209 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-03-17 01:08:48.185212 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-03-17 01:08:48.185216 | orchestrator |
2026-03-17 01:08:48.185219 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-03-17 01:08:48.185223 | orchestrator | Tuesday 17 March 2026 01:06:58 +0000 (0:00:02.776) 0:00:42.788 *********
2026-03-17 01:08:48.185226 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-03-17 01:08:48.185230 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-03-17 01:08:48.185233 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-03-17 01:08:48.185236 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-03-17 01:08:48.185402 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-03-17 01:08:48.185410 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-03-17 01:08:48.185415 | orchestrator |
2026-03-17 01:08:48.185421 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-03-17 01:08:48.185428 | orchestrator | Tuesday 17 March 2026 01:06:59 +0000 (0:00:01.003) 0:00:43.791 *********
2026-03-17 01:08:48.185435 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:48.185441 | orchestrator |
2026-03-17 01:08:48.185448 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-03-17 01:08:48.185454 | orchestrator | Tuesday 17 March 2026 01:06:59 +0000 (0:00:00.247) 0:00:44.038 *********
2026-03-17 01:08:48.185461 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:48.185465 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:08:48.185482 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:48.185486 | orchestrator |
2026-03-17 01:08:48.185490 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-17 01:08:48.185493 | orchestrator | Tuesday 17 March 2026 01:06:59 +0000 (0:00:00.412) 0:00:44.451 *********
2026-03-17 01:08:48.185497 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:08:48.185500 | orchestrator |
2026-03-17 01:08:48.185504 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-03-17 01:08:48.185507 | orchestrator | Tuesday 17 March 2026 01:07:00 +0000 (0:00:00.769) 0:00:45.220 *********
2026-03-17 01:08:48.185511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-17 01:08:48.185525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-17 01:08:48.185529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-17 01:08:48.185533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.185549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.185553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.185557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.185566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.185569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.185573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.185580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.185583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.185590 | orchestrator |
2026-03-17 01:08:48.185594 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-03-17 01:08:48.185597 | orchestrator | Tuesday 17 March 2026 01:07:04 +0000 (0:00:04.030) 0:00:49.251 *********
2026-03-17 01:08:48.185603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-17 01:08:48.185607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:08:48.185610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17
01:08:48.185617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185621 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:48.185624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:48.185632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185645 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 01:08:48.185649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:48.185654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185670 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:48.185673 | orchestrator | 2026-03-17 01:08:48.185677 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-17 01:08:48.185684 | orchestrator | Tuesday 17 March 2026 01:07:05 +0000 (0:00:00.691) 0:00:49.942 ********* 2026-03-17 01:08:48.185690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:48.185699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185727 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:48.185736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:48.185742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185765 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:48.185771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:48.185776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.185797 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:48.185803 | orchestrator | 2026-03-17 01:08:48.185810 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-17 01:08:48.185816 | orchestrator | Tuesday 17 March 2026 01:07:06 +0000 (0:00:01.054) 0:00:50.996 ********* 2026-03-17 01:08:48.185823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:48.185838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:48.185847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:48.185852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185897 | orchestrator | 2026-03-17 01:08:48.185901 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-17 01:08:48.185905 | orchestrator | Tuesday 17 March 2026 01:07:10 +0000 (0:00:03.784) 0:00:54.781 ********* 2026-03-17 01:08:48.185908 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-17 01:08:48.185914 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-17 01:08:48.185917 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-17 01:08:48.185921 | orchestrator | 2026-03-17 01:08:48.185924 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-17 01:08:48.185928 | orchestrator | Tuesday 17 March 2026 01:07:12 +0000 (0:00:02.313) 0:00:57.094 ********* 2026-03-17 01:08:48.185931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:48.185937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:48.185941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:48.185947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.185991 | orchestrator | 2026-03-17 01:08:48.185994 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-17 01:08:48.185998 | orchestrator | Tuesday 17 March 2026 01:07:22 +0000 (0:00:10.139) 0:01:07.234 ********* 2026-03-17 01:08:48.186002 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:48.186005 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:48.186009 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:48.186039 | orchestrator | 2026-03-17 01:08:48.186044 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-17 01:08:48.186049 | orchestrator | Tuesday 17 March 2026 01:07:24 +0000 (0:00:01.546) 0:01:08.781 ********* 2026-03-17 01:08:48.186055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:48.186060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.186073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.186083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.186090 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:48.186097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:48.186106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.186112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.186121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.186125 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 01:08:48.186132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-17 01:08:48.186136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.186140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.186147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:08:48.186153 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:48.186158 | orchestrator | 2026-03-17 01:08:48.186162 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-17 01:08:48.186166 | orchestrator | Tuesday 17 March 2026 01:07:24 +0000 (0:00:00.627) 0:01:09.408 ********* 2026-03-17 01:08:48.186170 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:48.186174 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:48.186178 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:48.186182 | orchestrator | 2026-03-17 01:08:48.186187 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-17 01:08:48.186191 | orchestrator | Tuesday 17 March 2026 01:07:25 +0000 (0:00:00.317) 0:01:09.726 ********* 2026-03-17 01:08:48.186195 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:48.186202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:48.186207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-17 01:08:48.186213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.186220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.186224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.186229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.186236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.186240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.186248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.186255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.186259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:48.186263 | orchestrator | 2026-03-17 01:08:48.186267 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-17 01:08:48.186271 | orchestrator | Tuesday 17 March 2026 01:07:28 +0000 (0:00:02.986) 0:01:12.712 ********* 2026-03-17 01:08:48.186276 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:48.186280 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:48.186284 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:48.186346 | orchestrator | 2026-03-17 01:08:48.186351 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-17 01:08:48.186356 | orchestrator | Tuesday 17 March 
2026 01:07:28 +0000 (0:00:00.436) 0:01:13.149 ********* 2026-03-17 01:08:48.186359 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:48.186364 | orchestrator | 2026-03-17 01:08:48.186368 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-17 01:08:48.186372 | orchestrator | Tuesday 17 March 2026 01:07:30 +0000 (0:00:02.323) 0:01:15.472 ********* 2026-03-17 01:08:48.186376 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:48.186380 | orchestrator | 2026-03-17 01:08:48.186384 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-17 01:08:48.186391 | orchestrator | Tuesday 17 March 2026 01:07:33 +0000 (0:00:02.396) 0:01:17.869 ********* 2026-03-17 01:08:48.186395 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:48.186399 | orchestrator | 2026-03-17 01:08:48.186403 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-17 01:08:48.186407 | orchestrator | Tuesday 17 March 2026 01:07:50 +0000 (0:00:17.636) 0:01:35.505 ********* 2026-03-17 01:08:48.186412 | orchestrator | 2026-03-17 01:08:48.186416 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-17 01:08:48.186420 | orchestrator | Tuesday 17 March 2026 01:07:51 +0000 (0:00:00.061) 0:01:35.567 ********* 2026-03-17 01:08:48.186424 | orchestrator | 2026-03-17 01:08:48.186428 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-17 01:08:48.186432 | orchestrator | Tuesday 17 March 2026 01:07:51 +0000 (0:00:00.061) 0:01:35.628 ********* 2026-03-17 01:08:48.186436 | orchestrator | 2026-03-17 01:08:48.186439 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-17 01:08:48.186446 | orchestrator | Tuesday 17 March 2026 01:07:51 +0000 (0:00:00.062) 0:01:35.691 ********* 2026-03-17 
01:08:48.186449 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:08:48.186453 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:08:48.186456 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:08:48.186460 | orchestrator |
2026-03-17 01:08:48.186463 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-03-17 01:08:48.186467 | orchestrator | Tuesday 17 March 2026 01:08:10 +0000 (0:00:19.224) 0:01:54.916 *********
2026-03-17 01:08:48.186470 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:08:48.186474 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:08:48.186477 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:08:48.186480 | orchestrator |
2026-03-17 01:08:48.186484 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-03-17 01:08:48.186487 | orchestrator | Tuesday 17 March 2026 01:08:18 +0000 (0:00:07.819) 0:02:02.735 *********
2026-03-17 01:08:48.186491 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:08:48.186494 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:08:48.186498 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:08:48.186501 | orchestrator |
2026-03-17 01:08:48.186505 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-03-17 01:08:48.186508 | orchestrator | Tuesday 17 March 2026 01:08:41 +0000 (0:00:22.974) 0:02:25.710 *********
2026-03-17 01:08:48.186512 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:08:48.186515 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:08:48.186521 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:08:48.186524 | orchestrator |
2026-03-17 01:08:48.186528 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-03-17 01:08:48.186532 | orchestrator | Tuesday 17 March 2026 01:08:47 +0000 (0:00:05.919) 0:02:31.630 *********
2026-03-17 01:08:48.186535 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:48.186539 | orchestrator |
2026-03-17 01:08:48.186542 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:08:48.186546 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-17 01:08:48.186550 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 01:08:48.186554 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 01:08:48.186557 | orchestrator |
2026-03-17 01:08:48.186561 | orchestrator |
2026-03-17 01:08:48.186565 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:08:48.186568 | orchestrator | Tuesday 17 March 2026 01:08:47 +0000 (0:00:00.233) 0:02:31.863 *********
2026-03-17 01:08:48.186571 | orchestrator | ===============================================================================
2026-03-17 01:08:48.186575 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 22.97s
2026-03-17 01:08:48.186578 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 19.22s
2026-03-17 01:08:48.186582 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.64s
2026-03-17 01:08:48.186585 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.14s
2026-03-17 01:08:48.186589 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 7.82s
2026-03-17 01:08:48.186592 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.08s
2026-03-17 01:08:48.186596 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.41s
2026-03-17 01:08:48.186599 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.92s
2026-03-17 01:08:48.186603 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.06s
2026-03-17 01:08:48.186608 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.03s
2026-03-17 01:08:48.186612 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.78s
2026-03-17 01:08:48.186615 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.53s
2026-03-17 01:08:48.186619 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.51s
2026-03-17 01:08:48.186622 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.32s
2026-03-17 01:08:48.186626 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.04s
2026-03-17 01:08:48.186629 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.99s
2026-03-17 01:08:48.186633 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.78s
2026-03-17 01:08:48.186639 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.40s
2026-03-17 01:08:48.186642 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.32s
2026-03-17 01:08:48.186646 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.31s
2026-03-17 01:08:48.186650 | orchestrator | 2026-03-17 01:08:48 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:08:51.227245 | orchestrator | 2026-03-17 01:08:51 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED
2026-03-17 01:08:51.230263 | orchestrator | 2026-03-17 01:08:51 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:08:51 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED
2026-03-17 01:08:51.233091 | orchestrator | 2026-03-17 01:08:51 | INFO  | Wait 1 second(s) until the next check
[... identical status polls repeated every ~3 seconds from 01:08:54 through 01:09:58: tasks ea18328c-e7b3-49f9-80db-8b4e07a1119e, bacfe409-12be-42eb-8a03-5371a7e815f5 and a66bd75d-c10e-450b-bf72-e73e8dc28ebf remained in state STARTED, each cycle followed by "Wait 1 second(s) until the next check" ...]
2026-03-17 01:10:01.188247 | orchestrator | 2026-03-17 01:10:01 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED
2026-03-17 01:10:01.189293 | orchestrator | 2026-03-17 01:10:01 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
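The wait loop visible in the log ("Task … is in state STARTED" three times, then "Wait 1 second(s) until the next check") is a plain poll-until-done pattern. A minimal sketch, assuming a hypothetical `get_state` callable in place of the real OSISM task API:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll each task's state until none is left in STARTED or the timeout hits.

    get_state(task_id) -> str is an illustrative stand-in for the real
    status lookup; the actual OSISM client interface differs.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    states = {}
    while pending and time.monotonic() < deadline:
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
        # Keep polling only the tasks that are still running.
        pending = {t for t in pending if states[t] == "STARTED"}
        if pending:
            time.sleep(interval)
    return states
```

A fixed one-second sleep between checks, as in the log, keeps the loop simple; a backoff would reduce log noise for long-running tasks at the cost of slower completion detection.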
2026-03-17 01:10:01 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED
2026-03-17 01:10:01.190672 | orchestrator | 2026-03-17 01:10:01 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:04.237303 | orchestrator | 2026-03-17 01:10:04 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED
2026-03-17 01:10:04.238875 | orchestrator | 2026-03-17 01:10:04 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:04.241943 | orchestrator | 2026-03-17 01:10:04 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED
2026-03-17 01:10:04.241981 | orchestrator | 2026-03-17 01:10:04 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:07.281960 | orchestrator | 2026-03-17 01:10:07 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED
2026-03-17 01:10:07.283553 | orchestrator | 2026-03-17 01:10:07 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:07.285120 | orchestrator | 2026-03-17 01:10:07 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED
2026-03-17 01:10:07.285157 | orchestrator | 2026-03-17 01:10:07 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:10.335158 | orchestrator | 2026-03-17 01:10:10 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state STARTED
2026-03-17 01:10:10.335594 | orchestrator | 2026-03-17 01:10:10 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:10.336749 | orchestrator | 2026-03-17 01:10:10 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED
2026-03-17 01:10:10.336791 | orchestrator | 2026-03-17 01:10:10 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:13.377259 | orchestrator | 2026-03-17 01:10:13 | INFO  | Task ea18328c-e7b3-49f9-80db-8b4e07a1119e is in state SUCCESS
2026-03-17 01:10:13.377970 | orchestrator |
2026-03-17 01:10:13.378099 | orchestrator |
2026-03-17 01:10:13.378212 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:10:13.378231 | orchestrator |
2026-03-17 01:10:13.378244 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:10:13.378255 | orchestrator | Tuesday 17 March 2026 01:08:14 +0000 (0:00:00.251) 0:00:00.251 *********
2026-03-17 01:10:13.378266 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:10:13.378278 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:10:13.378288 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:10:13.378299 | orchestrator |
2026-03-17 01:10:13.378310 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:10:13.378322 | orchestrator | Tuesday 17 March 2026 01:08:14 +0000 (0:00:00.262) 0:00:00.514 *********
2026-03-17 01:10:13.378333 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-17 01:10:13.378344 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-17 01:10:13.378356 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-03-17 01:10:13.378368 | orchestrator |
2026-03-17 01:10:13.378380 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-03-17 01:10:13.378392 | orchestrator |
2026-03-17 01:10:13.378404 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-17 01:10:13.378801 | orchestrator | Tuesday 17 March 2026 01:08:15 +0000 (0:00:00.334) 0:00:00.849 *********
2026-03-17 01:10:13.378811 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:10:13.378819 | orchestrator |
2026-03-17 01:10:13.378826 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-03-17 01:10:13.378834 | orchestrator | Tuesday 17 March 2026 01:08:15 +0000 (0:00:00.465) 0:00:01.314 *********
2026-03-17 01:10:13.378854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.378864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.378871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.378893 | orchestrator |
2026-03-17 01:10:13.378900 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-03-17 01:10:13.378906 | orchestrator | Tuesday 17 March 2026 01:08:16 +0000 (0:00:00.614) 0:00:01.928 *********
2026-03-17 01:10:13.378912 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-03-17 01:10:13.378919 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-03-17 01:10:13.378926 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 01:10:13.378932 | orchestrator |
2026-03-17 01:10:13.378938 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-17 01:10:13.378944 | orchestrator | Tuesday 17 March 2026 01:08:17 +0000 (0:00:00.784) 0:00:02.713 *********
2026-03-17 01:10:13.378951 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:10:13.378957 | orchestrator |
2026-03-17 01:10:13.378963 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-03-17 01:10:13.378969 | orchestrator | Tuesday 17 March 2026 01:08:17 +0000 (0:00:00.684) 0:00:03.397 *********
2026-03-17 01:10:13.378986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.378996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379009 | orchestrator |
2026-03-17 01:10:13.379015 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-03-17 01:10:13.379022 | orchestrator | Tuesday 17 March 2026 01:08:19 +0000 (0:00:01.347) 0:00:04.745 *********
2026-03-17 01:10:13.379028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379040 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:13.379047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379053 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:13.379067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379074 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:13.379080 | orchestrator |
2026-03-17 01:10:13.379087 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-03-17 01:10:13.379093 | orchestrator | Tuesday 17 March 2026 01:08:19 +0000 (0:00:00.499) 0:00:05.245 *********
2026-03-17 01:10:13.379099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379548 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:13.379559 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:13.379580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379590 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:13.379601 | orchestrator |
2026-03-17 01:10:13.379612 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-03-17 01:10:13.379623 | orchestrator | Tuesday 17 March 2026 01:08:20 +0000 (0:00:00.953) 0:00:06.198 *********
2026-03-17 01:10:13.379635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379693 | orchestrator |
2026-03-17 01:10:13.379700 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-03-17 01:10:13.379706 | orchestrator | Tuesday 17 March 2026 01:08:21 +0000 (0:00:01.325) 0:00:07.524 *********
2026-03-17 01:10:13.379718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-17 01:10:13.379748 | orchestrator |
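The item payload printed repeatedly above is a kolla-ansible style service definition; its `haproxy` section drives which frontends get rendered into the HAProxy configuration. A minimal sketch of reading that structure, using the grafana values from the log (the `enabled_frontends` helper is hypothetical, not part of kolla-ansible):

```python
# Service definition copied from the grafana item logged above (abridged to
# the keys relevant for haproxy).
service = {
    "container_name": "grafana",
    "enabled": True,
    "haproxy": {
        "grafana_server": {
            "enabled": "yes", "mode": "http", "external": False,
            "port": "3000", "listen_port": "3000",
        },
        "grafana_server_external": {
            "enabled": True, "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "3000", "listen_port": "3000",
        },
    },
}

def enabled_frontends(svc):
    """Return (name, listen_port, external) for each enabled haproxy entry.

    The log shows 'enabled' both as the string 'yes' and as boolean True,
    so both spellings are normalised here."""
    result = []
    for name, cfg in svc.get("haproxy", {}).items():
        enabled = cfg.get("enabled")
        if enabled is True or str(enabled).lower() in ("yes", "true"):
            result.append((name, cfg["listen_port"], cfg["external"]))
    return sorted(result)
```

For the grafana service above this yields one internal and one external frontend, both listening on port 3000, matching the `grafana_server` and `grafana_server_external` entries in the log.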
TASK [grafana : Copying over extra configuration file] ************************* 2026-03-17 01:10:13.379761 | orchestrator | Tuesday 17 March 2026 01:08:23 +0000 (0:00:01.222) 0:00:08.746 ********* 2026-03-17 01:10:13.379767 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:13.379773 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:13.379779 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:13.379785 | orchestrator | 2026-03-17 01:10:13.379792 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-17 01:10:13.379798 | orchestrator | Tuesday 17 March 2026 01:08:23 +0000 (0:00:00.456) 0:00:09.203 ********* 2026-03-17 01:10:13.379804 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-17 01:10:13.379810 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-17 01:10:13.379817 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-17 01:10:13.379823 | orchestrator | 2026-03-17 01:10:13.379829 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-17 01:10:13.379835 | orchestrator | Tuesday 17 March 2026 01:08:24 +0000 (0:00:01.202) 0:00:10.406 ********* 2026-03-17 01:10:13.379841 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-17 01:10:13.379865 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-17 01:10:13.379872 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-17 01:10:13.379894 | orchestrator | 2026-03-17 01:10:13.379900 | orchestrator | TASK [grafana : Find custom grafana dashboards] 
******************************** 2026-03-17 01:10:13.379907 | orchestrator | Tuesday 17 March 2026 01:08:25 +0000 (0:00:01.053) 0:00:11.460 ********* 2026-03-17 01:10:13.379913 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:10:13.379919 | orchestrator | 2026-03-17 01:10:13.379925 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-17 01:10:13.379931 | orchestrator | Tuesday 17 March 2026 01:08:26 +0000 (0:00:00.639) 0:00:12.100 ********* 2026-03-17 01:10:13.379937 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-17 01:10:13.379944 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-17 01:10:13.379954 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:10:13.379960 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:10:13.379966 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:10:13.379973 | orchestrator | 2026-03-17 01:10:13.379981 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-17 01:10:13.379991 | orchestrator | Tuesday 17 March 2026 01:08:27 +0000 (0:00:00.624) 0:00:12.724 ********* 2026-03-17 01:10:13.380005 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:13.380017 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:13.380027 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:13.380037 | orchestrator | 2026-03-17 01:10:13.380047 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-17 01:10:13.380057 | orchestrator | Tuesday 17 March 2026 01:08:27 +0000 (0:00:00.378) 0:00:13.103 ********* 2026-03-17 01:10:13.380073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1078089, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.020922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1078089, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.020922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1078089, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.020922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1078127, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0379398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1078127, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0379398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1078127, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0379398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1078097, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.027908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1078097, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.027908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1078097, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.027908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1078131, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0411396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1078131, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0411396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1078131, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0411396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380328 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1078110, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0309222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1078110, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0309222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1078110, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0309222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380361 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1078120, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0359223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1078120, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0359223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1078120, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0359223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-03-17 01:10:13.380428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1078087, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0185368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1078087, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0185368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1078087, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0185368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 
01:10:13.380456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1078093, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.023922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1078093, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.023922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1078093, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.023922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-03-17 01:10:13.380521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1078101, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0283887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1078101, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0283887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1078101, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0283887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1078115, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0336628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1078115, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0336628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1078115, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0336628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1078125, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0372655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1078125, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0372655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1078125, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0372655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1078094, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.025922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1078094, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.025922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1078094, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.025922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1078119, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0350938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1078119, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0350938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.380744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1078119, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0350938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1078111, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.032922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1078111, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.032922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1078111, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.032922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1078107, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0309222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1078107, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0309222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1078107, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0309222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1078105, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.029922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1078105, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.029922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1078105, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.029922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1078116, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0341418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1078116, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0341418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1078116, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0341418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1078103, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.028922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1078103, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.028922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1078103, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.028922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1078122, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0359223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1078122, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0359223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1078122, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0359223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1078205, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0789227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1078205, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0789227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1078205, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0789227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1078150, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0539224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1078150, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0539224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1078150, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0539224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1078143, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0449224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1078143, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0449224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1078143, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0449224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.380999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1078158, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0574472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1078158, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0574472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1078158, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0574472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1078137, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0419612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1078137, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0419612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1078137, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0419612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1078177, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0683486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1078177, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0683486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1078177, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0683486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1078160, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0661356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1078160, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0661356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1078160, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0661356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1078181, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0705535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1078181, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0705535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1078181, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0705535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1078203, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.076923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1078203, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.076923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1078203, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.076923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1078175, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0669227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1078175, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0669227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1078175, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0669227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1078153, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0549226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1078153, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0549226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1078153, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0549226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1078148, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0489223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1078148, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0489223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1078148, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0489223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1078151, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0549226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1078151, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0549226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1078151, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0549226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-17 01:10:13.381415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1078146, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0469224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False,
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1078146, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0469224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1078146, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0469224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1078154, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0574472, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1078154, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0574472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1078154, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0574472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1078193, 'dev': 102, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0759227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1078193, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0759227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1078193, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0759227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1078188, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0719228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1078188, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0719228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1078188, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0719228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1078138, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0429833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1078138, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0429833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1078138, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0429833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381609 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1078141, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.044159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1078141, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.044159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1078141, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.044159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 
01:10:13.381636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1078173, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0669227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1078173, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0669227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1078173, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0669227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1078186, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0708466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1078186, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773706838.0708466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1078186, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1773706838.0708466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-17 01:10:13.381700 | orchestrator | 2026-03-17 01:10:13.381710 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-17 01:10:13.381719 | orchestrator | Tuesday 17 March 2026 01:09:01 +0000 (0:00:34.049) 0:00:47.152 ********* 2026-03-17 01:10:13.381728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-17 01:10:13.381737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-17 01:10:13.381746 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-17 01:10:13.381755 | orchestrator | 2026-03-17 01:10:13.381763 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-17 01:10:13.381772 | orchestrator | Tuesday 17 March 2026 01:09:02 +0000 (0:00:00.920) 0:00:48.073 ********* 2026-03-17 01:10:13.381781 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:10:13.381791 | orchestrator | 2026-03-17 01:10:13.381839 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-17 01:10:13.381856 | orchestrator | Tuesday 17 March 2026 01:09:04 +0000 (0:00:02.199) 0:00:50.272 ********* 2026-03-17 01:10:13.381866 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:10:13.381876 | orchestrator | 2026-03-17 01:10:13.381886 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-17 01:10:13.381895 | orchestrator | Tuesday 17 March 2026 01:09:06 +0000 (0:00:02.341) 0:00:52.613 ********* 2026-03-17 01:10:13.381905 | orchestrator | 2026-03-17 01:10:13.381914 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-17 01:10:13.381925 | orchestrator | Tuesday 17 March 2026 01:09:06 +0000 (0:00:00.059) 0:00:52.673 ********* 2026-03-17 01:10:13.381936 | orchestrator | 
2026-03-17 01:10:13.381942 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-17 01:10:13.381948 | orchestrator | Tuesday 17 March 2026 01:09:07 +0000 (0:00:00.061) 0:00:52.734 *********
2026-03-17 01:10:13.381954 | orchestrator |
2026-03-17 01:10:13.381959 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-17 01:10:13.381965 | orchestrator | Tuesday 17 March 2026 01:09:07 +0000 (0:00:00.168) 0:00:52.903 *********
2026-03-17 01:10:13.381971 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:13.381977 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:13.381983 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:10:13.382066 | orchestrator |
2026-03-17 01:10:13.382078 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-17 01:10:13.382088 | orchestrator | Tuesday 17 March 2026 01:09:09 +0000 (0:00:01.957) 0:00:54.860 *********
2026-03-17 01:10:13.382098 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:13.382108 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:13.382119 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-17 01:10:13.382136 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-17 01:10:13.382148 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-03-17 01:10:13.382175 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:10:13.382186 | orchestrator |
2026-03-17 01:10:13.382195 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-17 01:10:13.382206 | orchestrator | Tuesday 17 March 2026 01:09:47 +0000 (0:00:38.443) 0:01:33.304 *********
2026-03-17 01:10:13.382216 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:13.382226 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:10:13.382232 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:10:13.382238 | orchestrator |
2026-03-17 01:10:13.382244 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-17 01:10:13.382250 | orchestrator | Tuesday 17 March 2026 01:10:08 +0000 (0:00:20.721) 0:01:54.026 *********
2026-03-17 01:10:13.382255 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:10:13.382261 | orchestrator |
2026-03-17 01:10:13.382267 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-17 01:10:13.382273 | orchestrator | Tuesday 17 March 2026 01:10:10 +0000 (0:00:01.873) 0:01:55.899 *********
2026-03-17 01:10:13.382278 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:13.382284 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:13.382290 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:13.382296 | orchestrator |
2026-03-17 01:10:13.382309 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-17 01:10:13.382315 | orchestrator | Tuesday 17 March 2026 01:10:10 +0000 (0:00:00.475) 0:01:56.375 *********
2026-03-17 01:10:13.382327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth':
False}}})
2026-03-17 01:10:13.382335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-17 01:10:13.382343 | orchestrator |
2026-03-17 01:10:13.382353 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-17 01:10:13.382365 | orchestrator | Tuesday 17 March 2026 01:10:12 +0000 (0:00:01.969) 0:01:58.345 *********
2026-03-17 01:10:13.382378 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:13.382395 | orchestrator |
2026-03-17 01:10:13.382404 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:10:13.382414 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 01:10:13.382426 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 01:10:13.382436 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 01:10:13.382445 | orchestrator |
2026-03-17 01:10:13.382455 | orchestrator |
2026-03-17 01:10:13.382465 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:10:13.382475 | orchestrator | Tuesday 17 March 2026 01:10:12 +0000 (0:00:00.223) 0:01:58.568 *********
2026-03-17 01:10:13.382485 | orchestrator | ===============================================================================
2026-03-17 01:10:13.382505 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.44s
2026-03-17 01:10:13.382515 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 34.05s
2026-03-17 01:10:13.382524 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 20.72s
2026-03-17 01:10:13.382536 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.34s
2026-03-17 01:10:13.382546 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.20s
2026-03-17 01:10:13.382556 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 1.97s
2026-03-17 01:10:13.382567 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.96s
2026-03-17 01:10:13.382578 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 1.87s
2026-03-17 01:10:13.382590 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.35s
2026-03-17 01:10:13.382601 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.33s
2026-03-17 01:10:13.382612 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.22s
2026-03-17 01:10:13.382623 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.20s
2026-03-17 01:10:13.382634 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.05s
2026-03-17 01:10:13.382645 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.95s
2026-03-17 01:10:13.382655 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.92s
2026-03-17 01:10:13.382666 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.78s
2026-03-17 01:10:13.382681 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.68s
2026-03-17 01:10:13.382692 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.64s
2026-03-17 01:10:13.382702 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.62s
2026-03-17 01:10:13.382713 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.61s
2026-03-17 01:10:13.382723 | orchestrator | 2026-03-17 01:10:13 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:13.382734 | orchestrator | 2026-03-17 01:10:13 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED
2026-03-17 01:10:13.382744 | orchestrator | 2026-03-17 01:10:13 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:16.417968 | orchestrator | 2026-03-17 01:10:16 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:16.419572 | orchestrator | 2026-03-17 01:10:16 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED
2026-03-17 01:10:16.419665 | orchestrator | 2026-03-17 01:10:16 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:19.463053 | orchestrator | 2026-03-17 01:10:19 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:19.464864 | orchestrator | 2026-03-17 01:10:19 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state STARTED
2026-03-17 01:10:19.465073 | orchestrator | 2026-03-17 01:10:19 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:22.516462 | orchestrator | 2026-03-17 01:10:22 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:10:22.517806 | orchestrator | 2026-03-17 01:10:22 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:22.518945 | orchestrator | 2026-03-17 01:10:22 | INFO  | Task a66bd75d-c10e-450b-bf72-e73e8dc28ebf is in state SUCCESS
2026-03-17 01:10:22.521242 | orchestrator | 2026-03-17 01:10:22 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:25.552735 |
orchestrator | 2026-03-17 01:10:25 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:10:25.555056 | orchestrator | 2026-03-17 01:10:25 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:25.555109 | orchestrator | 2026-03-17 01:10:25 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:28.588055 | orchestrator | 2026-03-17 01:10:28 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:10:28.590539 | orchestrator | 2026-03-17 01:10:28 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:28.590594 | orchestrator | 2026-03-17 01:10:28 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:31.615358 | orchestrator | 2026-03-17 01:10:31 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:10:31.615970 | orchestrator | 2026-03-17 01:10:31 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:31.615988 | orchestrator | 2026-03-17 01:10:31 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:34.649609 | orchestrator | 2026-03-17 01:10:34 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:10:34.649722 | orchestrator | 2026-03-17 01:10:34 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:34.649736 | orchestrator | 2026-03-17 01:10:34 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:37.685448 | orchestrator | 2026-03-17 01:10:37 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:10:37.686875 | orchestrator | 2026-03-17 01:10:37 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:37.687030 | orchestrator | 2026-03-17 01:10:37 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:40.730055 | orchestrator | 2026-03-17 01:10:40 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:10:40.731517 | orchestrator | 2026-03-17 01:10:40 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:40.731584 | orchestrator | 2026-03-17 01:10:40 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:43.769235 | orchestrator | 2026-03-17 01:10:43 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:10:43.769413 | orchestrator | 2026-03-17 01:10:43 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:43.769442 | orchestrator | 2026-03-17 01:10:43 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:46.803058 | orchestrator | 2026-03-17 01:10:46 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:10:46.804971 | orchestrator | 2026-03-17 01:10:46 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:46.805016 | orchestrator | 2026-03-17 01:10:46 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:49.838771 | orchestrator | 2026-03-17 01:10:49 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:10:49.840989 | orchestrator | 2026-03-17 01:10:49 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:49.841046 | orchestrator | 2026-03-17 01:10:49 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:52.881775 | orchestrator | 2026-03-17 01:10:52 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:10:52.881835 | orchestrator | 2026-03-17 01:10:52 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:52.881844 | orchestrator | 2026-03-17 01:10:52 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:55.915623 | orchestrator | 2026-03-17 01:10:55 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:10:55.917504 | orchestrator | 2026-03-17 01:10:55 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:55.917550 | orchestrator | 2026-03-17 01:10:55 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:10:58.952659 | orchestrator | 2026-03-17 01:10:58 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:10:58.952716 | orchestrator | 2026-03-17 01:10:58 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:10:58.952724 | orchestrator | 2026-03-17 01:10:58 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:01.987818 | orchestrator | 2026-03-17 01:11:01 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:11:01.987903 | orchestrator | 2026-03-17 01:11:01 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:11:01.987912 | orchestrator | 2026-03-17 01:11:01 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:05.032844 | orchestrator | 2026-03-17 01:11:05 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:11:05.039169 | orchestrator | 2026-03-17 01:11:05 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:11:05.039227 | orchestrator | 2026-03-17 01:11:05 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:08.077341 | orchestrator | 2026-03-17 01:11:08 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:11:08.078621 | orchestrator | 2026-03-17 01:11:08 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED
2026-03-17 01:11:08.078904 | orchestrator | 2026-03-17 01:11:08 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:11.139404 | orchestrator | 2026-03-17 01:11:11 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED
2026-03-17 01:11:11.139468 | orchestrator | 2026-03-17 01:11:11 | INFO  | Task
bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:11.139476 | orchestrator | 2026-03-17 01:11:11 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:14.166642 | orchestrator | 2026-03-17 01:11:14 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:14.167131 | orchestrator | 2026-03-17 01:11:14 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:14.167464 | orchestrator | 2026-03-17 01:11:14 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:17.192667 | orchestrator | 2026-03-17 01:11:17 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:17.194184 | orchestrator | 2026-03-17 01:11:17 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:17.194245 | orchestrator | 2026-03-17 01:11:17 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:20.224448 | orchestrator | 2026-03-17 01:11:20 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:20.227996 | orchestrator | 2026-03-17 01:11:20 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:20.228129 | orchestrator | 2026-03-17 01:11:20 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:23.276181 | orchestrator | 2026-03-17 01:11:23 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:23.276661 | orchestrator | 2026-03-17 01:11:23 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:23.278399 | orchestrator | 2026-03-17 01:11:23 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:26.339538 | orchestrator | 2026-03-17 01:11:26 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:26.342133 | orchestrator | 2026-03-17 01:11:26 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 
01:11:26.342518 | orchestrator | 2026-03-17 01:11:26 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:29.388773 | orchestrator | 2026-03-17 01:11:29 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:29.389329 | orchestrator | 2026-03-17 01:11:29 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:29.389362 | orchestrator | 2026-03-17 01:11:29 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:32.434206 | orchestrator | 2026-03-17 01:11:32 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:32.437089 | orchestrator | 2026-03-17 01:11:32 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:32.437152 | orchestrator | 2026-03-17 01:11:32 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:35.479365 | orchestrator | 2026-03-17 01:11:35 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:35.479455 | orchestrator | 2026-03-17 01:11:35 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:35.479464 | orchestrator | 2026-03-17 01:11:35 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:38.519914 | orchestrator | 2026-03-17 01:11:38 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:38.521743 | orchestrator | 2026-03-17 01:11:38 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:38.521834 | orchestrator | 2026-03-17 01:11:38 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:41.562006 | orchestrator | 2026-03-17 01:11:41 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:41.563513 | orchestrator | 2026-03-17 01:11:41 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:41.563608 | orchestrator | 2026-03-17 01:11:41 | INFO  | Wait 1 second(s) 
until the next check 2026-03-17 01:11:44.621978 | orchestrator | 2026-03-17 01:11:44 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:44.622074 | orchestrator | 2026-03-17 01:11:44 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:44.622094 | orchestrator | 2026-03-17 01:11:44 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:47.669608 | orchestrator | 2026-03-17 01:11:47 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:47.670953 | orchestrator | 2026-03-17 01:11:47 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:47.671007 | orchestrator | 2026-03-17 01:11:47 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:50.712790 | orchestrator | 2026-03-17 01:11:50 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:50.715156 | orchestrator | 2026-03-17 01:11:50 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:50.715203 | orchestrator | 2026-03-17 01:11:50 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:53.759812 | orchestrator | 2026-03-17 01:11:53 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:53.759870 | orchestrator | 2026-03-17 01:11:53 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:53.759878 | orchestrator | 2026-03-17 01:11:53 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:56.812506 | orchestrator | 2026-03-17 01:11:56 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:56.814779 | orchestrator | 2026-03-17 01:11:56 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:56.814826 | orchestrator | 2026-03-17 01:11:56 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:59.850701 | orchestrator | 2026-03-17 
01:11:59 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:11:59.851716 | orchestrator | 2026-03-17 01:11:59 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:11:59.851761 | orchestrator | 2026-03-17 01:11:59 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:02.882078 | orchestrator | 2026-03-17 01:12:02 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:02.882971 | orchestrator | 2026-03-17 01:12:02 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:02.883003 | orchestrator | 2026-03-17 01:12:02 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:05.929123 | orchestrator | 2026-03-17 01:12:05 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:05.931123 | orchestrator | 2026-03-17 01:12:05 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:05.931176 | orchestrator | 2026-03-17 01:12:05 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:08.971471 | orchestrator | 2026-03-17 01:12:08 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:08.974478 | orchestrator | 2026-03-17 01:12:08 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:08.974544 | orchestrator | 2026-03-17 01:12:08 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:12.017427 | orchestrator | 2026-03-17 01:12:12 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:12.018929 | orchestrator | 2026-03-17 01:12:12 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:12.018975 | orchestrator | 2026-03-17 01:12:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:15.068577 | orchestrator | 2026-03-17 01:12:15 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state 
STARTED 2026-03-17 01:12:15.068647 | orchestrator | 2026-03-17 01:12:15 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:15.068656 | orchestrator | 2026-03-17 01:12:15 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:18.113757 | orchestrator | 2026-03-17 01:12:18 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:18.115129 | orchestrator | 2026-03-17 01:12:18 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:18.115187 | orchestrator | 2026-03-17 01:12:18 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:21.149965 | orchestrator | 2026-03-17 01:12:21 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:21.151254 | orchestrator | 2026-03-17 01:12:21 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:21.151300 | orchestrator | 2026-03-17 01:12:21 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:24.197110 | orchestrator | 2026-03-17 01:12:24 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:24.200385 | orchestrator | 2026-03-17 01:12:24 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:24.201499 | orchestrator | 2026-03-17 01:12:24 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:27.254792 | orchestrator | 2026-03-17 01:12:27 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:27.255400 | orchestrator | 2026-03-17 01:12:27 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:27.255427 | orchestrator | 2026-03-17 01:12:27 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:30.304963 | orchestrator | 2026-03-17 01:12:30 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:30.307072 | orchestrator | 2026-03-17 01:12:30 | INFO  
| Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:30.307126 | orchestrator | 2026-03-17 01:12:30 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:33.350791 | orchestrator | 2026-03-17 01:12:33 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:33.351634 | orchestrator | 2026-03-17 01:12:33 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:33.351674 | orchestrator | 2026-03-17 01:12:33 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:36.395828 | orchestrator | 2026-03-17 01:12:36 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:36.397861 | orchestrator | 2026-03-17 01:12:36 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:36.397908 | orchestrator | 2026-03-17 01:12:36 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:39.442894 | orchestrator | 2026-03-17 01:12:39 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:39.444615 | orchestrator | 2026-03-17 01:12:39 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:39.444665 | orchestrator | 2026-03-17 01:12:39 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:42.496451 | orchestrator | 2026-03-17 01:12:42 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:42.498157 | orchestrator | 2026-03-17 01:12:42 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:42.498190 | orchestrator | 2026-03-17 01:12:42 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:45.541726 | orchestrator | 2026-03-17 01:12:45 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:45.543314 | orchestrator | 2026-03-17 01:12:45 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 
01:12:45.543354 | orchestrator | 2026-03-17 01:12:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:48.578629 | orchestrator | 2026-03-17 01:12:48 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:48.580285 | orchestrator | 2026-03-17 01:12:48 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:48.580333 | orchestrator | 2026-03-17 01:12:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:51.618166 | orchestrator | 2026-03-17 01:12:51 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:51.618671 | orchestrator | 2026-03-17 01:12:51 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:51.618689 | orchestrator | 2026-03-17 01:12:51 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:54.654984 | orchestrator | 2026-03-17 01:12:54 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:54.656773 | orchestrator | 2026-03-17 01:12:54 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:54.656820 | orchestrator | 2026-03-17 01:12:54 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:12:57.702726 | orchestrator | 2026-03-17 01:12:57 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:12:57.705486 | orchestrator | 2026-03-17 01:12:57 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:12:57.705687 | orchestrator | 2026-03-17 01:12:57 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:00.746988 | orchestrator | 2026-03-17 01:13:00 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:00.748321 | orchestrator | 2026-03-17 01:13:00 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:00.748358 | orchestrator | 2026-03-17 01:13:00 | INFO  | Wait 1 second(s) 
until the next check 2026-03-17 01:13:03.784533 | orchestrator | 2026-03-17 01:13:03 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:03.787114 | orchestrator | 2026-03-17 01:13:03 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:03.787166 | orchestrator | 2026-03-17 01:13:03 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:06.828124 | orchestrator | 2026-03-17 01:13:06 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:06.828798 | orchestrator | 2026-03-17 01:13:06 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:06.828829 | orchestrator | 2026-03-17 01:13:06 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:09.873439 | orchestrator | 2026-03-17 01:13:09 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:09.875442 | orchestrator | 2026-03-17 01:13:09 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:09.875496 | orchestrator | 2026-03-17 01:13:09 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:12.909130 | orchestrator | 2026-03-17 01:13:12 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:12.910395 | orchestrator | 2026-03-17 01:13:12 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:12.910466 | orchestrator | 2026-03-17 01:13:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:15.936569 | orchestrator | 2026-03-17 01:13:15 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:15.936936 | orchestrator | 2026-03-17 01:13:15 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:15.937026 | orchestrator | 2026-03-17 01:13:15 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:18.978469 | orchestrator | 2026-03-17 
01:13:18 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:18.979046 | orchestrator | 2026-03-17 01:13:18 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:18.979085 | orchestrator | 2026-03-17 01:13:18 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:22.011712 | orchestrator | 2026-03-17 01:13:22 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:22.014309 | orchestrator | 2026-03-17 01:13:22 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:22.014382 | orchestrator | 2026-03-17 01:13:22 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:25.044087 | orchestrator | 2026-03-17 01:13:25 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:25.044639 | orchestrator | 2026-03-17 01:13:25 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:25.044669 | orchestrator | 2026-03-17 01:13:25 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:28.084364 | orchestrator | 2026-03-17 01:13:28 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:28.086284 | orchestrator | 2026-03-17 01:13:28 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:28.086342 | orchestrator | 2026-03-17 01:13:28 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:31.114181 | orchestrator | 2026-03-17 01:13:31 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:31.115335 | orchestrator | 2026-03-17 01:13:31 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:31.115615 | orchestrator | 2026-03-17 01:13:31 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:34.152057 | orchestrator | 2026-03-17 01:13:34 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state 
STARTED 2026-03-17 01:13:34.153504 | orchestrator | 2026-03-17 01:13:34 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:34.153552 | orchestrator | 2026-03-17 01:13:34 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:37.179813 | orchestrator | 2026-03-17 01:13:37 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:37.180262 | orchestrator | 2026-03-17 01:13:37 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:37.181114 | orchestrator | 2026-03-17 01:13:37 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:40.201776 | orchestrator | 2026-03-17 01:13:40 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:40.202283 | orchestrator | 2026-03-17 01:13:40 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:40.202310 | orchestrator | 2026-03-17 01:13:40 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:43.228910 | orchestrator | 2026-03-17 01:13:43 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:43.228970 | orchestrator | 2026-03-17 01:13:43 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:43.228975 | orchestrator | 2026-03-17 01:13:43 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:46.263301 | orchestrator | 2026-03-17 01:13:46 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:46.264354 | orchestrator | 2026-03-17 01:13:46 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:46.264382 | orchestrator | 2026-03-17 01:13:46 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:49.306932 | orchestrator | 2026-03-17 01:13:49 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:49.310946 | orchestrator | 2026-03-17 01:13:49 | INFO  
| Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:49.311005 | orchestrator | 2026-03-17 01:13:49 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:52.356065 | orchestrator | 2026-03-17 01:13:52 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:52.359785 | orchestrator | 2026-03-17 01:13:52 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:52.359828 | orchestrator | 2026-03-17 01:13:52 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:55.404694 | orchestrator | 2026-03-17 01:13:55 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:55.406223 | orchestrator | 2026-03-17 01:13:55 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:55.406281 | orchestrator | 2026-03-17 01:13:55 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:58.452098 | orchestrator | 2026-03-17 01:13:58 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:13:58.453404 | orchestrator | 2026-03-17 01:13:58 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:13:58.453448 | orchestrator | 2026-03-17 01:13:58 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:01.492536 | orchestrator | 2026-03-17 01:14:01 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:01.494356 | orchestrator | 2026-03-17 01:14:01 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:14:01.495924 | orchestrator | 2026-03-17 01:14:01 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:04.539147 | orchestrator | 2026-03-17 01:14:04 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:04.540443 | orchestrator | 2026-03-17 01:14:04 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 
01:14:04.540489 | orchestrator | 2026-03-17 01:14:04 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:07.588070 | orchestrator | 2026-03-17 01:14:07 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:07.589846 | orchestrator | 2026-03-17 01:14:07 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:14:07.589891 | orchestrator | 2026-03-17 01:14:07 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:10.630897 | orchestrator | 2026-03-17 01:14:10 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:10.632443 | orchestrator | 2026-03-17 01:14:10 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:14:10.632581 | orchestrator | 2026-03-17 01:14:10 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:13.666697 | orchestrator | 2026-03-17 01:14:13 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:13.669303 | orchestrator | 2026-03-17 01:14:13 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:14:13.669343 | orchestrator | 2026-03-17 01:14:13 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:16.704723 | orchestrator | 2026-03-17 01:14:16 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:16.707634 | orchestrator | 2026-03-17 01:14:16 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state STARTED 2026-03-17 01:14:16.707686 | orchestrator | 2026-03-17 01:14:16 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:19.777064 | orchestrator | 2026-03-17 01:14:19 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:19.782957 | orchestrator | 2026-03-17 01:14:19 | INFO  | Task bacfe409-12be-42eb-8a03-5371a7e815f5 is in state SUCCESS 2026-03-17 01:14:19.785235 | orchestrator | 2026-03-17 01:14:19.785282 | orchestrator | 
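The log above shows the deploy wrapper polling each submitted task's state until it leaves STARTED. A minimal sketch of that polling pattern, assuming a hypothetical `get_state` callable (this is an illustration of the loop visible in the log, not OSISM's actual API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=1000):
    """Poll task states until every task reports SUCCESS or FAILURE.

    get_state is assumed to map a task id to a Celery-style state string
    ("STARTED", "SUCCESS", "FAILURE", ...). Returns True when all tasks
    finished, False if max_checks rounds elapse first.
    """
    pending = set(task_ids)
    for _ in range(max_checks):
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if not pending:
            return True
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    return False
```

Each round reports every still-pending task and sleeps between rounds, which is exactly the cadence of the repeated INFO lines above.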
2026-03-17 01:14:19.785287 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:14:19.785292 | orchestrator |
2026-03-17 01:14:19.785295 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:14:19.785298 | orchestrator | Tuesday 17 March 2026 01:07:57 +0000 (0:00:00.133) 0:00:00.133 *********
2026-03-17 01:14:19.785302 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:14:19.785306 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:14:19.785309 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:14:19.785314 | orchestrator |
2026-03-17 01:14:19.785319 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:14:19.785323 | orchestrator | Tuesday 17 March 2026 01:07:57 +0000 (0:00:00.265) 0:00:00.398 *********
2026-03-17 01:14:19.785329 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-17 01:14:19.785334 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-17 01:14:19.785339 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-17 01:14:19.785344 | orchestrator |
2026-03-17 01:14:19.785348 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-17 01:14:19.785353 | orchestrator |
2026-03-17 01:14:19.785369 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-17 01:14:19.785373 | orchestrator | Tuesday 17 March 2026 01:07:58 +0000 (0:00:00.567) 0:00:00.966 *********
2026-03-17 01:14:19.785376 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:14:19.785380 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:14:19.785383 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:14:19.785386 | orchestrator |
2026-03-17 01:14:19.785389 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:14:19.785393 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:14:19.785397 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:14:19.785401 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:14:19.785414 | orchestrator |
2026-03-17 01:14:19.785424 | orchestrator |
2026-03-17 01:14:19.785430 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:14:19.785435 | orchestrator | Tuesday 17 March 2026 01:10:19 +0000 (0:02:21.819) 0:02:22.785 *********
2026-03-17 01:14:19.785441 | orchestrator | ===============================================================================
2026-03-17 01:14:19.785447 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 141.82s
2026-03-17 01:14:19.785452 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s
2026-03-17 01:14:19.785458 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-03-17 01:14:19.785475 | orchestrator |
2026-03-17 01:14:19.785479 | orchestrator |
2026-03-17 01:14:19.785482 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:14:19.785485 | orchestrator |
2026-03-17 01:14:19.785488 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-17 01:14:19.785491 | orchestrator | Tuesday 17 March 2026 01:06:22 +0000 (0:00:00.342) 0:00:00.342 *********
2026-03-17 01:14:19.785494 | orchestrator | changed: [testbed-manager]
2026-03-17 01:14:19.785498 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.785501 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:14:19.785504 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:14:19.785507 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:14:19.785510 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:14:19.785514 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:14:19.785517 | orchestrator |
2026-03-17 01:14:19.785520 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:14:19.785523 | orchestrator | Tuesday 17 March 2026 01:06:23 +0000 (0:00:00.975) 0:00:01.318 *********
2026-03-17 01:14:19.785526 | orchestrator | changed: [testbed-manager]
2026-03-17 01:14:19.785529 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.785532 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:14:19.785535 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:14:19.785538 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:14:19.785541 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:14:19.785544 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:14:19.785547 | orchestrator |
2026-03-17 01:14:19.785550 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:14:19.785553 | orchestrator | Tuesday 17 March 2026 01:06:23 +0000 (0:00:00.766) 0:00:02.084 *********
2026-03-17 01:14:19.785556 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-17 01:14:19.785560 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-17 01:14:19.785563 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-17 01:14:19.785566 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-17 01:14:19.785569 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-17 01:14:19.785572 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-17 01:14:19.785575 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-17 01:14:19.785578 | orchestrator |
2026-03-17 01:14:19.785581 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-17 01:14:19.785584 | orchestrator |
2026-03-17 01:14:19.785587 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-17 01:14:19.785590 | orchestrator | Tuesday 17 March 2026 01:06:24 +0000 (0:00:01.075) 0:00:03.159 *********
2026-03-17 01:14:19.785593 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:14:19.785596 | orchestrator |
2026-03-17 01:14:19.785599 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-17 01:14:19.785603 | orchestrator | Tuesday 17 March 2026 01:06:25 +0000 (0:00:00.641) 0:00:03.801 *********
2026-03-17 01:14:19.785606 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-17 01:14:19.785618 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-17 01:14:19.785622 | orchestrator |
2026-03-17 01:14:19.785625 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-17 01:14:19.785628 | orchestrator | Tuesday 17 March 2026 01:06:29 +0000 (0:00:04.021) 0:00:07.823 *********
2026-03-17 01:14:19.785631 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 01:14:19.785634 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 01:14:19.785637 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.785640 | orchestrator |
2026-03-17 01:14:19.785644 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-17 01:14:19.785650 | orchestrator | Tuesday 17 March 2026 01:06:33 +0000 (0:00:00.793) 0:00:11.866 *********
2026-03-17 01:14:19.785653 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.785656 | orchestrator |
2026-03-17 01:14:19.785659 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-17 01:14:19.785662 | orchestrator | Tuesday 17 March 2026 01:06:34 +0000 (0:00:01.558) 0:00:12.660 *********
2026-03-17 01:14:19.785665 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.785668 | orchestrator |
2026-03-17 01:14:19.785673 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-17 01:14:19.785676 | orchestrator | Tuesday 17 March 2026 01:06:35 +0000 (0:00:02.298) 0:00:14.218 *********
2026-03-17 01:14:19.785680 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.785683 | orchestrator |
2026-03-17 01:14:19.785686 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-17 01:14:19.785689 | orchestrator | Tuesday 17 March 2026 01:06:38 +0000 (0:00:00.793) 0:00:16.516 *********
2026-03-17 01:14:19.785692 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.785695 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.785698 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.785701 | orchestrator |
2026-03-17 01:14:19.785704 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-17 01:14:19.785707 | orchestrator | Tuesday 17 March 2026 01:06:39 +0000 (0:00:00.793) 0:00:17.310 *********
2026-03-17 01:14:19.785710 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:14:19.785713 | orchestrator |
2026-03-17 01:14:19.785717 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-17 01:14:19.785720 | orchestrator | Tuesday 17 March 2026 01:07:11 +0000 (0:00:32.860) 0:00:50.171 *********
2026-03-17 01:14:19.785723 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.785726 | orchestrator |
2026-03-17 01:14:19.785730 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-17 01:14:19.785735 | orchestrator |
Tuesday 17 March 2026 01:07:27 +0000 (0:00:15.536) 0:01:05.708 ********* 2026-03-17 01:14:19.785742 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:14:19.785749 | orchestrator | 2026-03-17 01:14:19.785754 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-17 01:14:19.785759 | orchestrator | Tuesday 17 March 2026 01:07:40 +0000 (0:00:13.156) 0:01:18.864 ********* 2026-03-17 01:14:19.785763 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:14:19.785768 | orchestrator | 2026-03-17 01:14:19.785772 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-17 01:14:19.785777 | orchestrator | Tuesday 17 March 2026 01:07:41 +0000 (0:00:00.942) 0:01:19.807 ********* 2026-03-17 01:14:19.785781 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.785786 | orchestrator | 2026-03-17 01:14:19.786158 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-17 01:14:19.786165 | orchestrator | Tuesday 17 March 2026 01:07:41 +0000 (0:00:00.413) 0:01:20.220 ********* 2026-03-17 01:14:19.786169 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:14:19.786172 | orchestrator | 2026-03-17 01:14:19.786175 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-17 01:14:19.786179 | orchestrator | Tuesday 17 March 2026 01:07:42 +0000 (0:00:00.449) 0:01:20.669 ********* 2026-03-17 01:14:19.786182 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:14:19.786185 | orchestrator | 2026-03-17 01:14:19.786188 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-17 01:14:19.786192 | orchestrator | Tuesday 17 March 2026 01:08:01 +0000 (0:00:18.897) 0:01:39.566 ********* 2026-03-17 01:14:19.786195 | orchestrator | skipping: [testbed-node-0] 
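Editor's note: the "Running Nova API bootstrap container" and "Create cell0 mappings" tasks above wrap Nova's standard cells-v2 database setup. A minimal sketch of the equivalent nova-manage sequence (illustrative only; the exact invocation and flags inside the Kolla bootstrap container may differ):

```shell
# Sketch, not the literal Kolla invocation:
nova-manage api_db sync                   # populate the nova_api schema
nova-manage cell_v2 map_cell0             # map cell0 to the nova_cell0 database
nova-manage db sync                       # populate the cell database schema
nova-manage cell_v2 list_cells --verbose  # verify the registered cells
```

These are the documented `nova-manage` subcommands; Kolla runs them inside a short-lived bootstrap container rather than on the host, which is why the log only shows `ok: [testbed-node-0]` for the container task.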
2026-03-17 01:14:19.786198 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786201 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786204 | orchestrator |
2026-03-17 01:14:19.786208 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-17 01:14:19.786215 | orchestrator |
2026-03-17 01:14:19.786218 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-17 01:14:19.786222 | orchestrator | Tuesday 17 March 2026 01:08:01 +0000 (0:00:00.305) 0:01:39.872 *********
2026-03-17 01:14:19.786225 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:14:19.786228 | orchestrator |
2026-03-17 01:14:19.786231 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-17 01:14:19.786234 | orchestrator | Tuesday 17 March 2026 01:08:02 +0000 (0:00:00.486) 0:01:40.359 *********
2026-03-17 01:14:19.786237 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786240 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786243 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.786246 | orchestrator |
2026-03-17 01:14:19.786249 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-17 01:14:19.786252 | orchestrator | Tuesday 17 March 2026 01:08:04 +0000 (0:00:02.299) 0:01:42.658 *********
2026-03-17 01:14:19.786258 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786264 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786271 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.786277 | orchestrator |
2026-03-17 01:14:19.786282 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-17 01:14:19.786287 | orchestrator | Tuesday 17 March 2026 01:08:06 +0000 (0:00:02.148) 0:01:44.806 *********
2026-03-17 01:14:19.786292 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.786296 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786308 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786313 | orchestrator |
2026-03-17 01:14:19.786318 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-17 01:14:19.786323 | orchestrator | Tuesday 17 March 2026 01:08:06 +0000 (0:00:00.292) 0:01:45.099 *********
2026-03-17 01:14:19.786329 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-17 01:14:19.786334 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786339 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-17 01:14:19.786345 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786350 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-17 01:14:19.786355 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-17 01:14:19.786360 | orchestrator |
2026-03-17 01:14:19.786365 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-17 01:14:19.786370 | orchestrator | Tuesday 17 March 2026 01:08:13 +0000 (0:00:07.098) 0:01:52.197 *********
2026-03-17 01:14:19.786375 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.786380 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786385 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786391 | orchestrator |
2026-03-17 01:14:19.786400 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-17 01:14:19.786406 | orchestrator | Tuesday 17 March 2026 01:08:14 +0000 (0:00:00.380) 0:01:52.577 *********
2026-03-17 01:14:19.786411 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-17 01:14:19.786416 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.786422 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-17 01:14:19.786427 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786432 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-17 01:14:19.786437 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786478 | orchestrator |
2026-03-17 01:14:19.786484 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-17 01:14:19.786489 | orchestrator | Tuesday 17 March 2026 01:08:14 +0000 (0:00:00.586) 0:01:53.163 *********
2026-03-17 01:14:19.786494 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786499 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786504 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.786514 | orchestrator |
2026-03-17 01:14:19.786519 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-17 01:14:19.786524 | orchestrator | Tuesday 17 March 2026 01:08:15 +0000 (0:00:00.588) 0:01:53.752 *********
2026-03-17 01:14:19.786529 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786534 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786539 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.786544 | orchestrator |
2026-03-17 01:14:19.786550 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-17 01:14:19.786555 | orchestrator | Tuesday 17 March 2026 01:08:16 +0000 (0:00:00.906) 0:01:54.658 *********
2026-03-17 01:14:19.786560 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786565 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786570 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.786575 | orchestrator |
2026-03-17 01:14:19.786851 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-17 01:14:19.786864 | orchestrator | Tuesday 17 March 2026 01:08:18 +0000 (0:00:01.997) 0:01:56.656 *********
2026-03-17 01:14:19.786869 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786875 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786880 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:14:19.786885 | orchestrator |
2026-03-17 01:14:19.786890 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-17 01:14:19.786895 | orchestrator | Tuesday 17 March 2026 01:08:40 +0000 (0:00:22.105) 0:02:18.761 *********
2026-03-17 01:14:19.786898 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786901 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786904 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:14:19.786907 | orchestrator |
2026-03-17 01:14:19.786911 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-17 01:14:19.786914 | orchestrator | Tuesday 17 March 2026 01:08:54 +0000 (0:00:13.788) 0:02:32.549 *********
2026-03-17 01:14:19.786917 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:14:19.786920 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786923 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786926 | orchestrator |
2026-03-17 01:14:19.786929 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-17 01:14:19.786932 | orchestrator | Tuesday 17 March 2026 01:08:55 +0000 (0:00:00.835) 0:02:33.385 *********
2026-03-17 01:14:19.786935 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786938 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786941 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.786944 | orchestrator |
2026-03-17 01:14:19.786947 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-17 01:14:19.786950 | orchestrator | Tuesday 17 March 2026 01:09:06 +0000 (0:00:11.149) 0:02:44.534 *********
2026-03-17 01:14:19.786953 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.786956 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786959 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786962 | orchestrator |
2026-03-17 01:14:19.786965 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-17 01:14:19.786969 | orchestrator | Tuesday 17 March 2026 01:09:07 +0000 (0:00:00.904) 0:02:45.439 *********
2026-03-17 01:14:19.786972 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.786975 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.786978 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.786981 | orchestrator |
2026-03-17 01:14:19.786984 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-17 01:14:19.786987 | orchestrator |
2026-03-17 01:14:19.786990 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-17 01:14:19.786993 | orchestrator | Tuesday 17 March 2026 01:09:07 +0000 (0:00:00.394) 0:02:45.834 *********
2026-03-17 01:14:19.786996 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:14:19.787005 | orchestrator |
2026-03-17 01:14:19.787026 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-17 01:14:19.787032 | orchestrator | Tuesday 17 March 2026 01:09:08 +0000 (0:00:00.474) 0:02:46.308 *********
2026-03-17 01:14:19.787038 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-17 01:14:19.787043 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-17 01:14:19.787049 | orchestrator |
2026-03-17 01:14:19.787055 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
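Editor's note: the service-ks-register tasks in this play mirror what an operator would do by hand with the OpenStack CLI. A hedged sketch (the region name is an assumption; the endpoint URLs are the ones registered in this log):

```shell
# Sketch only: kolla-ansible performs the equivalent via the service-ks-register role.
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne nova internal https://api-int.testbed.osism.xyz:8774/v2.1
openstack endpoint create --region RegionOne nova public https://api.testbed.osism.xyz:8774/v2.1
openstack user create --project service --password-prompt nova
openstack role add --project service --user nova admin
```

The `[WARNING]: Module did not set no_log for update_password` seen under "Creating users" is a known Ansible module warning about password handling, not a task failure.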
2026-03-17 01:14:19.787060 | orchestrator | Tuesday 17 March 2026 01:09:12 +0000 (0:00:04.339) 0:02:50.648 *********
2026-03-17 01:14:19.787065 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-17 01:14:19.787071 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-17 01:14:19.787097 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-17 01:14:19.787108 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-17 01:14:19.787113 | orchestrator |
2026-03-17 01:14:19.787118 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-17 01:14:19.787122 | orchestrator | Tuesday 17 March 2026 01:09:18 +0000 (0:00:06.238) 0:02:56.887 *********
2026-03-17 01:14:19.787127 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:14:19.787220 | orchestrator |
2026-03-17 01:14:19.787228 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-17 01:14:19.787233 | orchestrator | Tuesday 17 March 2026 01:09:21 +0000 (0:00:02.923) 0:02:59.810 *********
2026-03-17 01:14:19.787237 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:14:19.787242 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-17 01:14:19.787247 | orchestrator |
2026-03-17 01:14:19.787251 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-17 01:14:19.787256 | orchestrator | Tuesday 17 March 2026 01:09:24 +0000 (0:00:03.319) 0:03:03.130 *********
2026-03-17 01:14:19.787260 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:14:19.787265 | orchestrator |
2026-03-17 01:14:19.787270 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-17 01:14:19.787274 | orchestrator | Tuesday 17 March 2026 01:09:27 +0000 (0:00:03.005) 0:03:06.135 *********
2026-03-17 01:14:19.787279 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-17 01:14:19.787283 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-17 01:14:19.787287 | orchestrator |
2026-03-17 01:14:19.787292 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-17 01:14:19.787297 | orchestrator | Tuesday 17 March 2026 01:09:35 +0000 (0:00:08.078) 0:03:14.214 *********
2026-03-17 01:14:19.787306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-17 01:14:19.787511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-17 01:14:19.787528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-17 01:14:19.787536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:14:19.787542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:14:19.787551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:14:19.787555 | orchestrator |
2026-03-17 01:14:19.787558 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-03-17 01:14:19.787561 | orchestrator | Tuesday 17 March 2026 01:09:37 +0000 (0:00:01.181) 0:03:15.395 *********
2026-03-17 01:14:19.787564 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.787568 | orchestrator |
2026-03-17 01:14:19.787571 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-17 01:14:19.787574 | orchestrator | Tuesday 17 March 2026 01:09:37 +0000 (0:00:00.110) 0:03:15.506 *********
2026-03-17 01:14:19.787577 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.787580 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.787583 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.787586 | orchestrator |
2026-03-17 01:14:19.787589 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-17 01:14:19.787605 | orchestrator | Tuesday 17 March 2026 01:09:37 +0000 (0:00:00.363) 0:03:15.869 *********
2026-03-17 01:14:19.787609 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 01:14:19.787612 | orchestrator |
2026-03-17 01:14:19.787616 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-17 01:14:19.787619 | orchestrator | Tuesday 17 March 2026 01:09:38 +0000 (0:00:00.635) 0:03:16.504 *********
2026-03-17 01:14:19.787622 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.787625 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.787628 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.787631 | orchestrator |
2026-03-17 01:14:19.787634 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-17 01:14:19.787637 | orchestrator | Tuesday 17 March 2026 01:09:38 +0000 (0:00:00.263) 0:03:16.768 *********
2026-03-17 01:14:19.787641 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:14:19.787647 | orchestrator |
2026-03-17 01:14:19.787652 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-17 01:14:19.787659 | orchestrator | Tuesday 17 March 2026 01:09:38 +0000 (0:00:00.477) 0:03:17.245 *********
2026-03-17 01:14:19.787665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-17 01:14:19.787674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-17 01:14:19.787695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-17 01:14:19.787705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:14:19.787712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:14:19.787717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:14:19.787726 | orchestrator |
2026-03-17 01:14:19.787733 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-17 01:14:19.787738 | orchestrator | Tuesday 17 March 2026 01:09:41 +0000 (0:00:02.276) 0:03:19.522 *********
2026-03-17 01:14:19.787744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-17 01:14:19.787764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:14:19.787768 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.787774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port':
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:14:19.787778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.787784 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.787787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:14:19.787790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.787794 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.787797 | orchestrator | 2026-03-17 01:14:19.787800 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-17 01:14:19.787830 | orchestrator | Tuesday 17 March 2026 01:09:41 +0000 (0:00:00.518) 0:03:20.040 ********* 2026-03-17 01:14:19.787836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:14:19.787839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.787845 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.787849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:14:19.787870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.787874 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.787891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:14:19.787896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.787901 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.787904 | orchestrator | 2026-03-17 01:14:19.787907 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-17 01:14:19.787911 | orchestrator | Tuesday 17 March 2026 01:09:42 +0000 (0:00:00.686) 0:03:20.727 ********* 2026-03-17 01:14:19.787914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:19.787927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:19.787934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:19.787940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.787943 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.787946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.787950 | orchestrator | 2026-03-17 01:14:19.787953 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-17 01:14:19.787956 | orchestrator | Tuesday 17 March 2026 01:09:44 +0000 (0:00:02.230) 0:03:22.958 ********* 2026-03-17 01:14:19.787969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:19.787979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:19.787992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:19.787999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
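Each loop item above carries a `healthcheck` block with string-valued seconds (`interval`, `timeout`, `start_period`), a retry count, and a `test` command list such as `['CMD-SHELL', 'healthcheck_port nova-scheduler 5672']`. A minimal sketch of turning one of these blocks into the shape the Docker Engine API expects (durations in nanoseconds); the function name and mapping are illustrative, not kolla-ansible's actual helper:

```python
# Sketch: convert a kolla-style healthcheck dict (string seconds, as seen in
# the loop items above) into Docker's API healthcheck shape (nanoseconds).
# The function name and field mapping are illustrative assumptions, not
# kolla-ansible internals.

NS_PER_SEC = 1_000_000_000

def to_docker_healthcheck(hc: dict) -> dict:
    return {
        "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672']
        "Interval": int(hc["interval"]) * NS_PER_SEC,
        "Timeout": int(hc["timeout"]) * NS_PER_SEC,
        "StartPeriod": int(hc["start_period"]) * NS_PER_SEC,
        "Retries": int(hc["retries"]),
    }

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port nova-scheduler 5672"],
      "timeout": "30"}
print(to_docker_healthcheck(hc)["Interval"])  # 30000000000
```

The string-to-integer conversion matters: the values in the service definitions are quoted strings, so a consumer has to parse them before scaling to the nanosecond granularity Docker uses.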
2026-03-17 01:14:19.788018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788033 | orchestrator | 2026-03-17 01:14:19.788040 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-17 01:14:19.788045 | orchestrator | Tuesday 17 March 2026 01:09:49 +0000 (0:00:04.697) 0:03:27.656 ********* 2026-03-17 01:14:19.788050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:14:19.788056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.788061 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.788066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:14:19.788086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.788093 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.788099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-17 01:14:19.788102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.788105 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.788108 | orchestrator | 2026-03-17 01:14:19.788112 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-17 01:14:19.788115 | orchestrator | Tuesday 17 March 2026 01:09:50 +0000 (0:00:00.719) 0:03:28.375 ********* 2026-03-17 01:14:19.788118 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:14:19.788121 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:14:19.788124 
| orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:19.788127 | orchestrator | 2026-03-17 01:14:19.788130 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-17 01:14:19.788133 | orchestrator | Tuesday 17 March 2026 01:09:51 +0000 (0:00:01.514) 0:03:29.889 ********* 2026-03-17 01:14:19.788137 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.788140 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.788143 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.788147 | orchestrator | 2026-03-17 01:14:19.788150 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-17 01:14:19.788154 | orchestrator | Tuesday 17 March 2026 01:09:51 +0000 (0:00:00.324) 0:03:30.214 ********* 2026-03-17 01:14:19.788168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:19.788177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:19.788181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:19.788185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788204 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788209 | orchestrator | 2026-03-17 01:14:19.788212 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-17 01:14:19.788216 | orchestrator | Tuesday 17 March 2026 01:09:53 +0000 (0:00:01.989) 0:03:32.204 ********* 2026-03-17 01:14:19.788219 | orchestrator | 2026-03-17 01:14:19.788223 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-17 01:14:19.788228 | orchestrator | Tuesday 17 March 2026 01:09:54 +0000 (0:00:00.133) 0:03:32.337 ********* 2026-03-17 01:14:19.788231 | orchestrator | 2026-03-17 01:14:19.788235 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-17 01:14:19.788238 | orchestrator | Tuesday 17 March 2026 01:09:54 +0000 (0:00:00.137) 0:03:32.474 ********* 2026-03-17 01:14:19.788242 | orchestrator | 2026-03-17 01:14:19.788245 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-17 01:14:19.788249 | orchestrator | Tuesday 17 March 2026 01:09:54 +0000 (0:00:00.135) 0:03:32.610 ********* 2026-03-17 01:14:19.788252 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:19.788256 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:14:19.788259 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:14:19.788263 | orchestrator | 2026-03-17 
01:14:19.788266 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-17 01:14:19.788270 | orchestrator | Tuesday 17 March 2026 01:10:13 +0000 (0:00:18.966) 0:03:51.576 ********* 2026-03-17 01:14:19.788273 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:19.788277 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:14:19.788281 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:14:19.788284 | orchestrator | 2026-03-17 01:14:19.788288 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-17 01:14:19.788291 | orchestrator | 2026-03-17 01:14:19.788295 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-17 01:14:19.788329 | orchestrator | Tuesday 17 March 2026 01:10:17 +0000 (0:00:04.377) 0:03:55.954 ********* 2026-03-17 01:14:19.788333 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:14:19.788336 | orchestrator | 2026-03-17 01:14:19.788340 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-17 01:14:19.788343 | orchestrator | Tuesday 17 March 2026 01:10:18 +0000 (0:00:00.976) 0:03:56.931 ********* 2026-03-17 01:14:19.788346 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.788349 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.788352 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.788355 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.788358 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.788361 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.788364 | orchestrator | 2026-03-17 01:14:19.788367 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-17 01:14:19.788370 
| orchestrator | Tuesday 17 March 2026 01:10:19 +0000 (0:00:00.534) 0:03:57.465 ********* 2026-03-17 01:14:19.788373 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.788377 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.788380 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.788383 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:14:19.788389 | orchestrator | 2026-03-17 01:14:19.788392 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-17 01:14:19.788395 | orchestrator | Tuesday 17 March 2026 01:10:20 +0000 (0:00:00.852) 0:03:58.317 ********* 2026-03-17 01:14:19.788398 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-17 01:14:19.788402 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-17 01:14:19.788405 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-17 01:14:19.788408 | orchestrator | 2026-03-17 01:14:19.788411 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-17 01:14:19.788414 | orchestrator | Tuesday 17 March 2026 01:10:20 +0000 (0:00:00.653) 0:03:58.971 ********* 2026-03-17 01:14:19.788417 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-17 01:14:19.788420 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-17 01:14:19.788423 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-17 01:14:19.788426 | orchestrator | 2026-03-17 01:14:19.788430 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-17 01:14:19.788433 | orchestrator | Tuesday 17 March 2026 01:10:21 +0000 (0:00:01.084) 0:04:00.055 ********* 2026-03-17 01:14:19.788436 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-17 01:14:19.788439 | orchestrator | skipping: [testbed-node-3] 2026-03-17 
01:14:19.788442 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-17 01:14:19.788445 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.788448 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-17 01:14:19.788451 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.788455 | orchestrator | 2026-03-17 01:14:19.788458 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-17 01:14:19.788461 | orchestrator | Tuesday 17 March 2026 01:10:22 +0000 (0:00:00.489) 0:04:00.545 ********* 2026-03-17 01:14:19.788464 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-17 01:14:19.788479 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-17 01:14:19.788483 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.788486 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-17 01:14:19.788489 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-17 01:14:19.788492 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-17 01:14:19.788495 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-17 01:14:19.788498 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.788501 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-17 01:14:19.788505 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-17 01:14:19.788508 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-17 01:14:19.788511 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.788516 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-17 
01:14:19.788519 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-17 01:14:19.788522 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-17 01:14:19.788525 | orchestrator | 2026-03-17 01:14:19.788528 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-17 01:14:19.788531 | orchestrator | Tuesday 17 March 2026 01:10:23 +0000 (0:00:00.991) 0:04:01.536 ********* 2026-03-17 01:14:19.788534 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.788537 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.788540 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.788544 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:14:19.788549 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:14:19.788552 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:14:19.788555 | orchestrator | 2026-03-17 01:14:19.788558 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-17 01:14:19.788561 | orchestrator | Tuesday 17 March 2026 01:10:24 +0000 (0:00:01.087) 0:04:02.624 ********* 2026-03-17 01:14:19.788564 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.788567 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.788570 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.788573 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:14:19.788577 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:14:19.788580 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:14:19.788583 | orchestrator | 2026-03-17 01:14:19.788586 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-17 01:14:19.788589 | orchestrator | Tuesday 17 March 2026 01:10:25 +0000 (0:00:01.599) 0:04:04.223 ********* 2026-03-17 01:14:19.788593 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788597 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788611 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788616 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 
2026-03-17 01:14:19.788626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788629 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788645 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
2026-03-17 01:14:19.788661 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788685 | orchestrator | 2026-03-17 01:14:19.788688 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-17 01:14:19.788691 | orchestrator | Tuesday 17 March 2026 01:10:27 +0000 (0:00:01.896) 0:04:06.120 ********* 2026-03-17 01:14:19.788695 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:14:19.788701 | orchestrator | 2026-03-17 01:14:19.788705 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-17 01:14:19.788708 | orchestrator | Tuesday 17 March 2026 01:10:28 +0000 (0:00:00.986) 0:04:07.106 ********* 2026-03-17 01:14:19.788712 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': 
True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788716 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788719 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788747 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788753 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788781 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788788 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.788791 | orchestrator | 2026-03-17 01:14:19.788794 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-17 01:14:19.788798 | orchestrator | Tuesday 17 March 2026 01:10:32 +0000 (0:00:03.359) 0:04:10.465 ********* 2026-03-17 01:14:19.788801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:14:19.788826 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:14:19.788832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:14:19.788836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.788839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:14:19.788842 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.788846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.788849 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.788862 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:14:19.788872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:14:19.788876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.788879 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.788882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:14:19.788885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.788889 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.788892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:14:19.788907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.788911 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.788916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:14:19.788919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.788922 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.788926 | orchestrator | 2026-03-17 01:14:19.788929 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-17 01:14:19.788932 | orchestrator | Tuesday 17 March 2026 01:10:33 +0000 (0:00:01.332) 0:04:11.798 ********* 2026-03-17 01:14:19.788935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:14:19.788938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:14:19.788944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.788950 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.788968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:14:19.788978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:14:19.788983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.788989 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.788994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:14:19.789002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:14:19.789015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.789019 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.789024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:14:19.789027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.789030 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.789033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:14:19.789036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.789042 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.789045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:14:19.789048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.789060 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.789064 | orchestrator | 2026-03-17 01:14:19.789067 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-17 01:14:19.789070 | orchestrator | Tuesday 17 March 2026 01:10:35 +0000 (0:00:01.727) 0:04:13.526 ********* 2026-03-17 01:14:19.789073 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.789077 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.789080 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.789083 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:14:19.789086 | orchestrator | 2026-03-17 01:14:19.789089 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-17 01:14:19.789092 | orchestrator | Tuesday 17 March 2026 01:10:36 +0000 (0:00:00.845) 0:04:14.371 ********* 2026-03-17 01:14:19.789095 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 01:14:19.789098 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 01:14:19.789101 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-17 01:14:19.789104 | orchestrator | 2026-03-17 01:14:19.789109 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-17 01:14:19.789112 | orchestrator | Tuesday 17 March 2026 01:10:36 +0000 (0:00:00.794) 0:04:15.165 ********* 2026-03-17 01:14:19.789115 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 01:14:19.789118 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 01:14:19.789121 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-17 01:14:19.789124 | orchestrator | 2026-03-17 
01:14:19.789128 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-17 01:14:19.789131 | orchestrator | Tuesday 17 March 2026 01:10:37 +0000 (0:00:00.779) 0:04:15.945 ********* 2026-03-17 01:14:19.789134 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:14:19.789137 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:14:19.789140 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:14:19.789143 | orchestrator | 2026-03-17 01:14:19.789146 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-17 01:14:19.789149 | orchestrator | Tuesday 17 March 2026 01:10:38 +0000 (0:00:00.422) 0:04:16.367 ********* 2026-03-17 01:14:19.789152 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:14:19.789155 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:14:19.789158 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:14:19.789161 | orchestrator | 2026-03-17 01:14:19.789164 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-17 01:14:19.789171 | orchestrator | Tuesday 17 March 2026 01:10:38 +0000 (0:00:00.607) 0:04:16.975 ********* 2026-03-17 01:14:19.789174 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-17 01:14:19.789178 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-17 01:14:19.789181 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-17 01:14:19.789184 | orchestrator | 2026-03-17 01:14:19.789187 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-17 01:14:19.789190 | orchestrator | Tuesday 17 March 2026 01:10:39 +0000 (0:00:01.268) 0:04:18.243 ********* 2026-03-17 01:14:19.789193 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-17 01:14:19.789196 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-17 01:14:19.789199 | orchestrator | changed: 
[testbed-node-5] => (item=nova-compute) 2026-03-17 01:14:19.789202 | orchestrator | 2026-03-17 01:14:19.789205 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-17 01:14:19.789208 | orchestrator | Tuesday 17 March 2026 01:10:41 +0000 (0:00:01.332) 0:04:19.575 ********* 2026-03-17 01:14:19.789211 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-17 01:14:19.789214 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-17 01:14:19.789217 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-17 01:14:19.789220 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-17 01:14:19.789223 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-17 01:14:19.789226 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-17 01:14:19.789229 | orchestrator | 2026-03-17 01:14:19.789233 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-17 01:14:19.789236 | orchestrator | Tuesday 17 March 2026 01:10:44 +0000 (0:00:03.500) 0:04:23.076 ********* 2026-03-17 01:14:19.789239 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.789242 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.789245 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.789248 | orchestrator | 2026-03-17 01:14:19.789251 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-17 01:14:19.789254 | orchestrator | Tuesday 17 March 2026 01:10:45 +0000 (0:00:00.419) 0:04:23.495 ********* 2026-03-17 01:14:19.789257 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.789260 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.789263 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.789266 | orchestrator | 2026-03-17 01:14:19.789269 | orchestrator | TASK [nova-cell : Ensuring 
libvirt secrets directory exists] ******************* 2026-03-17 01:14:19.789273 | orchestrator | Tuesday 17 March 2026 01:10:45 +0000 (0:00:00.285) 0:04:23.781 ********* 2026-03-17 01:14:19.789276 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:14:19.789279 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:14:19.789282 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:14:19.789285 | orchestrator | 2026-03-17 01:14:19.789288 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-17 01:14:19.789291 | orchestrator | Tuesday 17 March 2026 01:10:46 +0000 (0:00:01.039) 0:04:24.821 ********* 2026-03-17 01:14:19.789294 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-17 01:14:19.789298 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-17 01:14:19.789311 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-17 01:14:19.789315 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-17 01:14:19.789318 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-17 01:14:19.789324 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-17 01:14:19.789327 | orchestrator | 2026-03-17 01:14:19.789330 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-17 01:14:19.789333 | orchestrator | Tuesday 17 March 2026 01:10:49 +0000 (0:00:02.763) 
0:04:27.585 ********* 2026-03-17 01:14:19.789336 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 01:14:19.789339 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 01:14:19.789344 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 01:14:19.789347 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 01:14:19.789350 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:14:19.789354 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 01:14:19.789357 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:14:19.789360 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 01:14:19.789363 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:14:19.789366 | orchestrator | 2026-03-17 01:14:19.789369 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-17 01:14:19.789373 | orchestrator | Tuesday 17 March 2026 01:10:52 +0000 (0:00:02.917) 0:04:30.502 ********* 2026-03-17 01:14:19.789376 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.789379 | orchestrator | 2026-03-17 01:14:19.789382 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-17 01:14:19.789385 | orchestrator | Tuesday 17 March 2026 01:10:52 +0000 (0:00:00.122) 0:04:30.625 ********* 2026-03-17 01:14:19.789388 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.789391 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.789394 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.789397 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.789400 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.789403 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.789406 | orchestrator | 2026-03-17 01:14:19.789409 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-17 
01:14:19.789412 | orchestrator | Tuesday 17 March 2026 01:10:52 +0000 (0:00:00.499) 0:04:31.124 ********* 2026-03-17 01:14:19.789416 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 01:14:19.789419 | orchestrator | 2026-03-17 01:14:19.789422 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-17 01:14:19.789425 | orchestrator | Tuesday 17 March 2026 01:10:53 +0000 (0:00:00.609) 0:04:31.734 ********* 2026-03-17 01:14:19.789428 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.789431 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.789434 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.789437 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.789440 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.789443 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.789446 | orchestrator | 2026-03-17 01:14:19.789450 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-17 01:14:19.789453 | orchestrator | Tuesday 17 March 2026 01:10:54 +0000 (0:00:00.636) 0:04:32.371 ********* 2026-03-17 01:14:19.789457 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789467 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789478 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789484 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789500 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789537 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789544 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789552 | orchestrator | 2026-03-17 01:14:19.789556 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-17 01:14:19.789561 | orchestrator | Tuesday 17 March 2026 01:10:57 +0000 (0:00:03.392) 0:04:35.764 ********* 2026-03-17 01:14:19.789567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:14:19.789573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 
 2026-03-17 01:14:19.789578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:14:19.789586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:14:19.789594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:14:19.789603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:14:19.789608 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789614 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789622 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.789655 | orchestrator | 2026-03-17 01:14:19.789658 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-17 01:14:19.789661 | orchestrator | Tuesday 17 March 2026 01:11:04 +0000 (0:00:06.716) 0:04:42.480 ********* 2026-03-17 01:14:19.789664 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.789668 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.789671 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.789674 | 
orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.789677 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.789680 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.789683 | orchestrator | 2026-03-17 01:14:19.789686 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-17 01:14:19.789689 | orchestrator | Tuesday 17 March 2026 01:11:05 +0000 (0:00:01.366) 0:04:43.846 ********* 2026-03-17 01:14:19.789692 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-17 01:14:19.789695 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-17 01:14:19.789698 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-17 01:14:19.789702 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-17 01:14:19.789705 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-17 01:14:19.789710 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-17 01:14:19.789713 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.789716 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-17 01:14:19.789719 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.789722 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-17 01:14:19.789725 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-17 01:14:19.789728 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.789731 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-17 
01:14:19.789735 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-17 01:14:19.789738 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-17 01:14:19.789741 | orchestrator | 2026-03-17 01:14:19.789745 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-17 01:14:19.789749 | orchestrator | Tuesday 17 March 2026 01:11:08 +0000 (0:00:03.356) 0:04:47.203 ********* 2026-03-17 01:14:19.789752 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.789755 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.789758 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.789761 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.789764 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.789767 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.789772 | orchestrator | 2026-03-17 01:14:19.789775 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-17 01:14:19.789778 | orchestrator | Tuesday 17 March 2026 01:11:09 +0000 (0:00:00.567) 0:04:47.770 ********* 2026-03-17 01:14:19.789782 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-17 01:14:19.789785 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-17 01:14:19.789788 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-17 01:14:19.789791 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-17 01:14:19.789795 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 
'service': 'nova-compute'}) 2026-03-17 01:14:19.789798 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-17 01:14:19.789801 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-17 01:14:19.789804 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-17 01:14:19.789807 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-17 01:14:19.789828 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-17 01:14:19.789833 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.789839 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-17 01:14:19.789844 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.789849 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-17 01:14:19.789854 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.789860 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-17 01:14:19.789863 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-17 01:14:19.789868 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-17 01:14:19.789873 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-17 01:14:19.789881 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-17 01:14:19.789886 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-17 01:14:19.789890 | orchestrator | 2026-03-17 01:14:19.789895 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-17 01:14:19.789900 | orchestrator | Tuesday 17 March 2026 01:11:14 +0000 (0:00:04.618) 0:04:52.388 ********* 2026-03-17 01:14:19.789905 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-17 01:14:19.789909 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-17 01:14:19.789914 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-17 01:14:19.789922 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-17 01:14:19.789927 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-17 01:14:19.789932 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-17 01:14:19.789941 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-17 01:14:19.789947 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-17 01:14:19.789953 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-17 01:14:19.789958 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-17 01:14:19.789963 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-17 01:14:19.789967 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-17 01:14:19.789975 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-17 01:14:19.789979 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.789982 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-17 01:14:19.789985 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.789988 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-17 01:14:19.789991 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.789994 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-17 01:14:19.789997 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-17 01:14:19.790000 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-17 01:14:19.790003 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-17 01:14:19.790006 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-17 01:14:19.790009 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-17 01:14:19.790051 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-17 01:14:19.790060 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-17 01:14:19.790066 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-17 01:14:19.790071 | orchestrator | 2026-03-17 01:14:19.790076 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-17 01:14:19.790081 | orchestrator | Tuesday 17 March 2026 01:11:20 +0000 
(0:00:06.663) 0:04:59.052 ********* 2026-03-17 01:14:19.790087 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.790090 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.790093 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.790096 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.790099 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.790104 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.790109 | orchestrator | 2026-03-17 01:14:19.790114 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-17 01:14:19.790119 | orchestrator | Tuesday 17 March 2026 01:11:21 +0000 (0:00:00.729) 0:04:59.781 ********* 2026-03-17 01:14:19.790123 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.790128 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.790133 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.790138 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.790143 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.790148 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.790153 | orchestrator | 2026-03-17 01:14:19.790158 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-17 01:14:19.790163 | orchestrator | Tuesday 17 March 2026 01:11:22 +0000 (0:00:00.589) 0:05:00.371 ********* 2026-03-17 01:14:19.790168 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.790173 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:14:19.790181 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.790184 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.790190 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:14:19.790194 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:14:19.790199 | orchestrator | 2026-03-17 01:14:19.790204 | orchestrator | TASK [nova-cell : Copying 
over existing policy file] *************************** 2026-03-17 01:14:19.790209 | orchestrator | Tuesday 17 March 2026 01:11:24 +0000 (0:00:02.244) 0:05:02.616 ********* 2026-03-17 01:14:19.790221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:14:19.790228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:14:19.790238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.790244 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.790250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:14:19.790253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:14:19.790260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.790263 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.790271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  
2026-03-17 01:14:19.790277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:14:19.790280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.790283 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.790287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:14:19.790292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.790296 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.790299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:14:19.790305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.790309 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.790314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:14:19.790317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:14:19.790320 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.790324 | orchestrator | 2026-03-17 01:14:19.790327 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-17 01:14:19.790330 | orchestrator | Tuesday 17 March 2026 01:11:25 +0000 (0:00:01.425) 0:05:04.041 ********* 2026-03-17 01:14:19.790333 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-17 01:14:19.790336 | 
orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-17 01:14:19.790343 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.790346 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-17 01:14:19.790349 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-17 01:14:19.790352 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.790355 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-17 01:14:19.790358 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-17 01:14:19.790361 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.790364 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-17 01:14:19.790367 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-17 01:14:19.790371 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.790374 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-17 01:14:19.790377 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-17 01:14:19.790380 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.790383 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-17 01:14:19.790386 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-17 01:14:19.790389 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.790392 | orchestrator | 2026-03-17 01:14:19.790395 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-17 01:14:19.790398 | orchestrator | Tuesday 17 March 2026 01:11:26 +0000 (0:00:00.781) 0:05:04.823 ********* 2026-03-17 01:14:19.790401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790408 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790452 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2026-03-17 01:14:19.790467 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:19.790472 | orchestrator | 2026-03-17 01:14:19.790475 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-17 01:14:19.790479 | orchestrator | Tuesday 17 March 2026 01:11:29 +0000 (0:00:03.075) 0:05:07.899 ********* 2026-03-17 01:14:19.790482 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.790485 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:14:19.790488 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.790491 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.790494 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.790497 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.790500 | orchestrator | 2026-03-17 01:14:19.790503 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:14:19.790506 | orchestrator | Tuesday 17 March 2026 01:11:30 +0000 (0:00:00.774) 0:05:08.673 ********* 2026-03-17 01:14:19.790510 | orchestrator | 2026-03-17 01:14:19.790513 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 
2026-03-17 01:14:19.790516 | orchestrator | Tuesday 17 March 2026 01:11:30 +0000 (0:00:00.133) 0:05:08.807 ********* 2026-03-17 01:14:19.790519 | orchestrator | 2026-03-17 01:14:19.790522 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:14:19.790525 | orchestrator | Tuesday 17 March 2026 01:11:30 +0000 (0:00:00.126) 0:05:08.934 ********* 2026-03-17 01:14:19.790528 | orchestrator | 2026-03-17 01:14:19.790531 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:14:19.790534 | orchestrator | Tuesday 17 March 2026 01:11:30 +0000 (0:00:00.127) 0:05:09.062 ********* 2026-03-17 01:14:19.790537 | orchestrator | 2026-03-17 01:14:19.790540 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:14:19.790543 | orchestrator | Tuesday 17 March 2026 01:11:30 +0000 (0:00:00.138) 0:05:09.200 ********* 2026-03-17 01:14:19.790546 | orchestrator | 2026-03-17 01:14:19.790549 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:14:19.790553 | orchestrator | Tuesday 17 March 2026 01:11:31 +0000 (0:00:00.122) 0:05:09.322 ********* 2026-03-17 01:14:19.790556 | orchestrator | 2026-03-17 01:14:19.790559 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-17 01:14:19.790562 | orchestrator | Tuesday 17 March 2026 01:11:31 +0000 (0:00:00.275) 0:05:09.598 ********* 2026-03-17 01:14:19.790565 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:19.790568 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:14:19.790571 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:14:19.790574 | orchestrator | 2026-03-17 01:14:19.790577 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-17 01:14:19.790580 | orchestrator | Tuesday 17 March 2026 
01:11:42 +0000 (0:00:11.040) 0:05:20.639 ********* 2026-03-17 01:14:19.790583 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:19.790586 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:14:19.790590 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:14:19.790593 | orchestrator | 2026-03-17 01:14:19.790596 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-17 01:14:19.790599 | orchestrator | Tuesday 17 March 2026 01:11:53 +0000 (0:00:11.453) 0:05:32.092 ********* 2026-03-17 01:14:19.790602 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:14:19.790605 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:14:19.790608 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:14:19.790611 | orchestrator | 2026-03-17 01:14:19.790614 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-17 01:14:19.790617 | orchestrator | Tuesday 17 March 2026 01:12:14 +0000 (0:00:20.787) 0:05:52.880 ********* 2026-03-17 01:14:19.790620 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:14:19.790623 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:14:19.790628 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:14:19.790631 | orchestrator | 2026-03-17 01:14:19.790635 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-17 01:14:19.790638 | orchestrator | Tuesday 17 March 2026 01:12:44 +0000 (0:00:30.257) 0:06:23.137 ********* 2026-03-17 01:14:19.790641 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:14:19.790644 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:14:19.790647 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:14:19.790650 | orchestrator | 2026-03-17 01:14:19.790653 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-17 01:14:19.790658 | orchestrator | Tuesday 17 March 2026 01:12:45 +0000 
(0:00:00.667) 0:06:23.805 ********* 2026-03-17 01:14:19.790661 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:14:19.790664 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:14:19.790667 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:14:19.790670 | orchestrator | 2026-03-17 01:14:19.790673 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-17 01:14:19.790676 | orchestrator | Tuesday 17 March 2026 01:12:46 +0000 (0:00:00.681) 0:06:24.486 ********* 2026-03-17 01:14:19.790680 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:14:19.790683 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:14:19.790686 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:14:19.790689 | orchestrator | 2026-03-17 01:14:19.790692 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-17 01:14:19.790695 | orchestrator | Tuesday 17 March 2026 01:13:11 +0000 (0:00:24.801) 0:06:49.288 ********* 2026-03-17 01:14:19.790698 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.790701 | orchestrator | 2026-03-17 01:14:19.790704 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-17 01:14:19.790709 | orchestrator | Tuesday 17 March 2026 01:13:11 +0000 (0:00:00.120) 0:06:49.408 ********* 2026-03-17 01:14:19.790712 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:14:19.790715 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:14:19.790718 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:19.790721 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:19.790725 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:19.790728 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-17 01:14:19.790731 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-17 01:14:19.790734 | orchestrator |
2026-03-17 01:14:19.790737 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-03-17 01:14:19.790740 | orchestrator | Tuesday 17 March 2026 01:13:32 +0000 (0:00:21.686) 0:07:11.094 *********
2026-03-17 01:14:19.790743 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:14:19.790747 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.790750 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:14:19.790753 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:14:19.790756 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.790759 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.790762 | orchestrator |
2026-03-17 01:14:19.790765 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-03-17 01:14:19.790768 | orchestrator | Tuesday 17 March 2026 01:13:41 +0000 (0:00:08.853) 0:07:19.947 *********
2026-03-17 01:14:19.790771 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:14:19.790774 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.790777 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.790780 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:14:19.790783 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.790787 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-03-17 01:14:19.790790 | orchestrator |
2026-03-17 01:14:19.790793 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-17 01:14:19.790796 | orchestrator | Tuesday 17 March 2026 01:13:45 +0000 (0:00:03.332) 0:07:23.280 *********
2026-03-17 01:14:19.790801 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-17 01:14:19.790805 | orchestrator |
2026-03-17 01:14:19.790808 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-17 01:14:19.790827 | orchestrator | Tuesday 17 March 2026 01:13:57 +0000 (0:00:12.739) 0:07:36.019 *********
2026-03-17 01:14:19.790832 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-17 01:14:19.790837 | orchestrator |
2026-03-17 01:14:19.790840 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-03-17 01:14:19.790843 | orchestrator | Tuesday 17 March 2026 01:13:58 +0000 (0:00:01.185) 0:07:37.204 *********
2026-03-17 01:14:19.790847 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:14:19.790850 | orchestrator |
2026-03-17 01:14:19.790853 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-03-17 01:14:19.790856 | orchestrator | Tuesday 17 March 2026 01:14:00 +0000 (0:00:01.219) 0:07:38.424 *********
2026-03-17 01:14:19.790859 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-17 01:14:19.790864 | orchestrator |
2026-03-17 01:14:19.790869 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-03-17 01:14:19.790874 | orchestrator | Tuesday 17 March 2026 01:14:12 +0000 (0:00:12.548) 0:07:50.972 *********
2026-03-17 01:14:19.790879 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:14:19.790884 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:14:19.790889 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:14:19.790894 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:14:19.790899 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:14:19.790904 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:14:19.790907 | orchestrator |
2026-03-17 01:14:19.790911 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-03-17 01:14:19.790914 | orchestrator |
2026-03-17 01:14:19.790917 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-03-17 01:14:19.790920 | orchestrator | Tuesday 17 March 2026 01:14:14 +0000 (0:00:01.721) 0:07:52.694 *********
2026-03-17 01:14:19.790923 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:19.790926 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:14:19.790929 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:14:19.790932 | orchestrator |
2026-03-17 01:14:19.790935 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-03-17 01:14:19.790939 | orchestrator |
2026-03-17 01:14:19.790942 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-03-17 01:14:19.790945 | orchestrator | Tuesday 17 March 2026 01:14:15 +0000 (0:00:01.051) 0:07:53.745 *********
2026-03-17 01:14:19.790948 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.790951 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.790954 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.790957 | orchestrator |
2026-03-17 01:14:19.790960 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-03-17 01:14:19.790963 | orchestrator |
2026-03-17 01:14:19.790969 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-03-17 01:14:19.790973 | orchestrator | Tuesday 17 March 2026 01:14:16 +0000 (0:00:00.517) 0:07:54.263 *********
2026-03-17 01:14:19.790976 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-03-17 01:14:19.790979 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-17 01:14:19.790982 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-17 01:14:19.790986 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-03-17 01:14:19.790989 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-17 01:14:19.790992 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-17 01:14:19.790995 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:14:19.790998 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-17 01:14:19.791004 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-17 01:14:19.791008 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-17 01:14:19.791013 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-17 01:14:19.791016 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-17 01:14:19.791019 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-17 01:14:19.791022 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:14:19.791025 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-17 01:14:19.791028 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-17 01:14:19.791031 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-17 01:14:19.791034 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-17 01:14:19.791038 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-17 01:14:19.791041 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-17 01:14:19.791044 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:14:19.791047 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-17 01:14:19.791050 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-17 01:14:19.791053 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-17 01:14:19.791056 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-17 01:14:19.791059 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-17 01:14:19.791062 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-17 01:14:19.791065 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.791069 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-17 01:14:19.791072 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-17 01:14:19.791075 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-17 01:14:19.791078 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-17 01:14:19.791081 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-17 01:14:19.791084 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-17 01:14:19.791088 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.791091 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-17 01:14:19.791094 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-17 01:14:19.791097 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-17 01:14:19.791100 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-17 01:14:19.791103 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-17 01:14:19.791106 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-17 01:14:19.791109 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.791112 | orchestrator |
2026-03-17 01:14:19.791115 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-17 01:14:19.791118 | orchestrator |
2026-03-17 01:14:19.791121 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-17 01:14:19.791124 | orchestrator | Tuesday 17 March 2026 01:14:17 +0000 (0:00:01.113) 0:07:55.377 *********
2026-03-17 01:14:19.791128 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-17 01:14:19.791131 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-17 01:14:19.791134 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.791137 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-17 01:14:19.791140 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-17 01:14:19.791143 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.791146 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-17 01:14:19.791149 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-17 01:14:19.791154 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.791157 | orchestrator |
2026-03-17 01:14:19.791160 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-17 01:14:19.791163 | orchestrator |
2026-03-17 01:14:19.791166 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-17 01:14:19.791169 | orchestrator | Tuesday 17 March 2026 01:14:17 +0000 (0:00:00.615) 0:07:55.992 *********
2026-03-17 01:14:19.791173 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.791176 | orchestrator |
2026-03-17 01:14:19.791179 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-17 01:14:19.791182 | orchestrator |
2026-03-17 01:14:19.791185 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-17 01:14:19.791188 | orchestrator | Tuesday 17 March 2026 01:14:18 +0000 (0:00:00.586) 0:07:56.579 *********
2026-03-17 01:14:19.791191 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:14:19.791194 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:14:19.791199 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:14:19.791202 | orchestrator | 2026-03-17 01:14:19.791205 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:14:19.791208 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:14:19.791212 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-03-17 01:14:19.791215 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-17 01:14:19.791218 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-17 01:14:19.791223 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-17 01:14:19.791227 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-03-17 01:14:19.791230 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-17 01:14:19.791233 | orchestrator | 2026-03-17 01:14:19.791236 | orchestrator | 2026-03-17 01:14:19.791239 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:14:19.791242 | orchestrator | Tuesday 17 March 2026 01:14:18 +0000 (0:00:00.372) 0:07:56.952 ********* 2026-03-17 01:14:19.791245 | orchestrator | =============================================================================== 2026-03-17 01:14:19.791249 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.86s 2026-03-17 01:14:19.791252 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.26s 2026-03-17 01:14:19.791255 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.80s 2026-03-17 01:14:19.791258 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 22.11s 2026-03-17 01:14:19.791261 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.69s 2026-03-17 01:14:19.791264 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.79s 2026-03-17 01:14:19.791267 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.97s 2026-03-17 01:14:19.791270 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.90s 2026-03-17 01:14:19.791273 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.54s 2026-03-17 01:14:19.791276 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.79s 2026-03-17 01:14:19.791283 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.16s 2026-03-17 01:14:19.791286 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.74s 2026-03-17 01:14:19.791289 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.55s 2026-03-17 01:14:19.791292 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.45s 2026-03-17 01:14:19.791295 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.15s 2026-03-17 01:14:19.791298 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.04s 2026-03-17 01:14:19.791301 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.85s 2026-03-17 01:14:19.791304 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.08s 2026-03-17 01:14:19.791307 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.10s 2026-03-17 01:14:19.791310 | orchestrator | nova-cell : Copying over 
nova.conf -------------------------------------- 6.72s 2026-03-17 01:14:19.791314 | orchestrator | 2026-03-17 01:14:19 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:22.820767 | orchestrator | 2026-03-17 01:14:22 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:22.820851 | orchestrator | 2026-03-17 01:14:22 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:25.865126 | orchestrator | 2026-03-17 01:14:25 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:25.865186 | orchestrator | 2026-03-17 01:14:25 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:28.903698 | orchestrator | 2026-03-17 01:14:28 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:28.903762 | orchestrator | 2026-03-17 01:14:28 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:31.945453 | orchestrator | 2026-03-17 01:14:31 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:31.945555 | orchestrator | 2026-03-17 01:14:31 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:34.985602 | orchestrator | 2026-03-17 01:14:34 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:34.985667 | orchestrator | 2026-03-17 01:14:34 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:38.025101 | orchestrator | 2026-03-17 01:14:38 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:38.025158 | orchestrator | 2026-03-17 01:14:38 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:41.063180 | orchestrator | 2026-03-17 01:14:41 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:41.063251 | orchestrator | 2026-03-17 01:14:41 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:44.105510 | orchestrator | 2026-03-17 01:14:44 | INFO  | Task 
e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:44.105582 | orchestrator | 2026-03-17 01:14:44 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:47.151732 | orchestrator | 2026-03-17 01:14:47 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:47.151828 | orchestrator | 2026-03-17 01:14:47 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:50.193664 | orchestrator | 2026-03-17 01:14:50 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state STARTED 2026-03-17 01:14:50.193718 | orchestrator | 2026-03-17 01:14:50 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:14:53.240276 | orchestrator | 2026-03-17 01:14:53 | INFO  | Task e5e38a61-da71-47c6-8b6b-4e0117b1c2bc is in state SUCCESS 2026-03-17 01:14:53.241950 | orchestrator | 2026-03-17 01:14:53.242054 | orchestrator | 2026-03-17 01:14:53.242064 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:14:53.242068 | orchestrator | 2026-03-17 01:14:53.242071 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:14:53.242075 | orchestrator | Tuesday 17 March 2026 01:10:24 +0000 (0:00:00.229) 0:00:00.229 ********* 2026-03-17 01:14:53.242078 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:14:53.242082 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:14:53.242085 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:14:53.242088 | orchestrator | 2026-03-17 01:14:53.242091 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:14:53.242095 | orchestrator | Tuesday 17 March 2026 01:10:24 +0000 (0:00:00.253) 0:00:00.482 ********* 2026-03-17 01:14:53.242098 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-17 01:14:53.242101 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-17 01:14:53.242104 | 
orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-17 01:14:53.242107 | orchestrator | 2026-03-17 01:14:53.242110 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-17 01:14:53.242113 | orchestrator | 2026-03-17 01:14:53.242117 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:14:53.242120 | orchestrator | Tuesday 17 March 2026 01:10:24 +0000 (0:00:00.362) 0:00:00.845 ********* 2026-03-17 01:14:53.242123 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:14:53.242126 | orchestrator | 2026-03-17 01:14:53.242129 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-17 01:14:53.242132 | orchestrator | Tuesday 17 March 2026 01:10:25 +0000 (0:00:00.485) 0:00:01.330 ********* 2026-03-17 01:14:53.242136 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-17 01:14:53.242139 | orchestrator | 2026-03-17 01:14:53.242142 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-17 01:14:53.242145 | orchestrator | Tuesday 17 March 2026 01:10:29 +0000 (0:00:03.773) 0:00:05.104 ********* 2026-03-17 01:14:53.242265 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-17 01:14:53.242271 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-17 01:14:53.242274 | orchestrator | 2026-03-17 01:14:53.242277 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-17 01:14:53.242280 | orchestrator | Tuesday 17 March 2026 01:10:35 +0000 (0:00:06.201) 0:00:11.306 ********* 2026-03-17 01:14:53.242284 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-17 
01:14:53.242287 | orchestrator | 2026-03-17 01:14:53.242290 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-17 01:14:53.242293 | orchestrator | Tuesday 17 March 2026 01:10:38 +0000 (0:00:02.892) 0:00:14.198 ********* 2026-03-17 01:14:53.242296 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-17 01:14:53.242300 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-17 01:14:53.242303 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-17 01:14:53.242306 | orchestrator | 2026-03-17 01:14:53.242309 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-17 01:14:53.242312 | orchestrator | Tuesday 17 March 2026 01:10:47 +0000 (0:00:08.833) 0:00:23.031 ********* 2026-03-17 01:14:53.242315 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-17 01:14:53.242318 | orchestrator | 2026-03-17 01:14:53.242322 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-17 01:14:53.242325 | orchestrator | Tuesday 17 March 2026 01:10:50 +0000 (0:00:02.988) 0:00:26.020 ********* 2026-03-17 01:14:53.242328 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-17 01:14:53.242339 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-17 01:14:53.242342 | orchestrator | 2026-03-17 01:14:53.242345 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-17 01:14:53.242349 | orchestrator | Tuesday 17 March 2026 01:10:57 +0000 (0:00:07.105) 0:00:33.126 ********* 2026-03-17 01:14:53.242352 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-17 01:14:53.242355 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-17 01:14:53.242404 | orchestrator | changed: 
[testbed-node-0] => (item=load-balancer_member) 2026-03-17 01:14:53.242408 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-17 01:14:53.242411 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-17 01:14:53.242415 | orchestrator | 2026-03-17 01:14:53.242418 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:14:53.242421 | orchestrator | Tuesday 17 March 2026 01:11:13 +0000 (0:00:16.304) 0:00:49.430 ********* 2026-03-17 01:14:53.242430 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:14:53.242433 | orchestrator | 2026-03-17 01:14:53.242436 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-17 01:14:53.242439 | orchestrator | Tuesday 17 March 2026 01:11:14 +0000 (0:00:00.488) 0:00:49.919 ********* 2026-03-17 01:14:53.242564 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.242571 | orchestrator | 2026-03-17 01:14:53.242580 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-17 01:14:53.242585 | orchestrator | Tuesday 17 March 2026 01:11:20 +0000 (0:00:06.393) 0:00:56.313 ********* 2026-03-17 01:14:53.242590 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.242595 | orchestrator | 2026-03-17 01:14:53.242600 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-17 01:14:53.242627 | orchestrator | Tuesday 17 March 2026 01:11:24 +0000 (0:00:04.330) 0:01:00.644 ********* 2026-03-17 01:14:53.242634 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:14:53.242639 | orchestrator | 2026-03-17 01:14:53.242644 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-17 01:14:53.242649 | orchestrator | Tuesday 17 March 2026 01:11:28 +0000 
(0:00:03.823) 0:01:04.467 ********* 2026-03-17 01:14:53.242654 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-17 01:14:53.242660 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-17 01:14:53.242665 | orchestrator | 2026-03-17 01:14:53.242670 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-17 01:14:53.242675 | orchestrator | Tuesday 17 March 2026 01:11:38 +0000 (0:00:10.314) 0:01:14.782 ********* 2026-03-17 01:14:53.242680 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-17 01:14:53.242685 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-17 01:14:53.242691 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-17 01:14:53.242697 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-17 01:14:53.242702 | orchestrator | 2026-03-17 01:14:53.242707 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-17 01:14:53.242712 | orchestrator | Tuesday 17 March 2026 01:11:55 +0000 (0:00:17.009) 0:01:31.791 ********* 2026-03-17 01:14:53.242717 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.242722 | orchestrator | 2026-03-17 01:14:53.242728 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-17 01:14:53.242732 | orchestrator | Tuesday 17 March 2026 01:12:00 +0000 (0:00:04.260) 0:01:36.052 ********* 2026-03-17 01:14:53.242742 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.242745 | orchestrator | 2026-03-17 
01:14:53.242748 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-17 01:14:53.242751 | orchestrator | Tuesday 17 March 2026 01:12:04 +0000 (0:00:04.807) 0:01:40.860 ********* 2026-03-17 01:14:53.242754 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:53.242757 | orchestrator | 2026-03-17 01:14:53.242825 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-17 01:14:53.242839 | orchestrator | Tuesday 17 March 2026 01:12:05 +0000 (0:00:00.210) 0:01:41.070 ********* 2026-03-17 01:14:53.242842 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:14:53.242850 | orchestrator | 2026-03-17 01:14:53.242853 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:14:53.242856 | orchestrator | Tuesday 17 March 2026 01:12:08 +0000 (0:00:03.386) 0:01:44.456 ********* 2026-03-17 01:14:53.242860 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:14:53.242863 | orchestrator | 2026-03-17 01:14:53.242867 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-17 01:14:53.242870 | orchestrator | Tuesday 17 March 2026 01:12:09 +0000 (0:00:00.946) 0:01:45.403 ********* 2026-03-17 01:14:53.242873 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:14:53.242876 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.242891 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:14:53.242895 | orchestrator | 2026-03-17 01:14:53.242898 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-17 01:14:53.242901 | orchestrator | Tuesday 17 March 2026 01:12:14 +0000 (0:00:04.616) 0:01:50.020 ********* 2026-03-17 01:14:53.242904 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.242907 | orchestrator | changed: 
[testbed-node-2]
2026-03-17 01:14:53.242910 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:14:53.242913 | orchestrator |
2026-03-17 01:14:53.242916 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-03-17 01:14:53.242919 | orchestrator | Tuesday 17 March 2026 01:12:18 +0000 (0:00:04.546) 0:01:54.567 *********
2026-03-17 01:14:53.242923 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:53.242926 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:14:53.242929 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:14:53.242932 | orchestrator |
2026-03-17 01:14:53.242935 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-03-17 01:14:53.242938 | orchestrator | Tuesday 17 March 2026 01:12:19 +0000 (0:00:00.759) 0:01:55.327 *********
2026-03-17 01:14:53.242941 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:14:53.242944 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:14:53.242947 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:14:53.242951 | orchestrator |
2026-03-17 01:14:53.242954 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-03-17 01:14:53.242957 | orchestrator | Tuesday 17 March 2026 01:12:21 +0000 (0:00:01.781) 0:01:57.108 *********
2026-03-17 01:14:53.242960 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:14:53.242963 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:53.242966 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:14:53.242969 | orchestrator |
2026-03-17 01:14:53.242976 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-03-17 01:14:53.242980 | orchestrator | Tuesday 17 March 2026 01:12:22 +0000 (0:00:01.284) 0:01:58.393 *********
2026-03-17 01:14:53.242983 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:14:53.242986 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:53.242989 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:14:53.242992 | orchestrator |
2026-03-17 01:14:53.242995 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-03-17 01:14:53.242998 | orchestrator | Tuesday 17 March 2026 01:12:23 +0000 (0:00:01.133) 0:01:59.526 *********
2026-03-17 01:14:53.243001 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:14:53.243009 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:14:53.243012 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:53.243015 | orchestrator |
2026-03-17 01:14:53.243053 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-03-17 01:14:53.243066 | orchestrator | Tuesday 17 March 2026 01:12:25 +0000 (0:00:02.131) 0:02:01.658 *********
2026-03-17 01:14:53.243073 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:14:53.243077 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:14:53.243082 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:14:53.243088 | orchestrator |
2026-03-17 01:14:53.243093 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-03-17 01:14:53.243101 | orchestrator | Tuesday 17 March 2026 01:12:27 +0000 (0:00:00.583) 0:02:03.189 *********
2026-03-17 01:14:53.243108 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:14:53.243113 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:14:53.243119 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:14:53.243124 | orchestrator |
2026-03-17 01:14:53.243129 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-03-17 01:14:53.243135 | orchestrator | Tuesday 17 March 2026 01:12:27 +0000 (0:00:03.246) 0:02:03.772 *********
2026-03-17 01:14:53.243140 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:14:53.243146 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:14:53.243152 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:14:53.243157 | orchestrator |
2026-03-17 01:14:53.243162 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-17 01:14:53.243165 | orchestrator | Tuesday 17 March 2026 01:12:31 +0000 (0:00:00.674) 0:02:07.019 *********
2026-03-17 01:14:53.243168 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:14:53.243172 | orchestrator |
2026-03-17 01:14:53.243175 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-03-17 01:14:53.243178 | orchestrator | Tuesday 17 March 2026 01:12:31 +0000 (0:00:00.674) 0:02:07.693 *********
2026-03-17 01:14:53.243181 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:14:53.243184 | orchestrator |
2026-03-17 01:14:53.243187 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-17 01:14:53.243190 | orchestrator | Tuesday 17 March 2026 01:12:35 +0000 (0:00:03.887) 0:02:11.580 *********
2026-03-17 01:14:53.243193 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:14:53.243197 | orchestrator |
2026-03-17 01:14:53.243201 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-03-17 01:14:53.243204 | orchestrator | Tuesday 17 March 2026 01:12:38 +0000 (0:00:03.087) 0:02:14.668 *********
2026-03-17 01:14:53.243208 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-17 01:14:53.243212 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-17 01:14:53.243216 | orchestrator |
2026-03-17 01:14:53.243219 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-03-17 01:14:53.243223 | orchestrator | Tuesday 17 March 2026 01:12:44 +0000 (0:00:05.826) 0:02:20.495 *********
2026-03-17 01:14:53.243226 | orchestrator | ok: [testbed-node-0]
2026-03-17 
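The "Wait for interface ohm0 ip appear" task above is a poll-until-ready loop: retry a check at a fixed interval until it succeeds or a timeout expires. A minimal sketch of that pattern in Python (the `wait_for` helper, its timings, and the example predicate are illustrative, not the actual Ansible module kolla-ansible uses):

```python
import time


def wait_for(predicate, timeout=30.0, interval=0.5):
    """Poll predicate() until it returns a truthy value, or raise
    TimeoutError once the deadline passes. Mirrors the shape of an
    Ansible wait_for-style task (e.g. waiting for ohm0 to get an IP)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)


# Example: a predicate that is immediately ready (a real one might
# parse `ip -4 addr show ohm0` for an assigned address).
address = wait_for(lambda: "192.168.43.10", timeout=1.0, interval=0.01)
```

The log shows the task succeeding quickly on all three nodes; in a failure case the timeout branch is what would surface as a task error.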
01:14:53.243230 | orchestrator | 2026-03-17 01:14:53.243234 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-17 01:14:53.243237 | orchestrator | Tuesday 17 March 2026 01:12:47 +0000 (0:00:02.825) 0:02:23.320 ********* 2026-03-17 01:14:53.243241 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:14:53.243244 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:14:53.243248 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:14:53.243251 | orchestrator | 2026-03-17 01:14:53.243255 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-17 01:14:53.243258 | orchestrator | Tuesday 17 March 2026 01:12:47 +0000 (0:00:00.396) 0:02:23.716 ********* 2026-03-17 01:14:53.243264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.243301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.243309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.243315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.243321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.243327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.243338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243428 | orchestrator | 2026-03-17 01:14:53.243435 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-17 01:14:53.243441 | orchestrator | Tuesday 17 March 2026 01:12:50 +0000 (0:00:02.577) 0:02:26.294 ********* 2026-03-17 01:14:53.243445 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:53.243449 | orchestrator | 2026-03-17 01:14:53.243452 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-17 01:14:53.243455 | orchestrator | Tuesday 17 March 2026 01:12:50 +0000 (0:00:00.158) 0:02:26.452 ********* 2026-03-17 01:14:53.243458 
| orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:53.243461 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:53.243464 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:53.243467 | orchestrator | 2026-03-17 01:14:53.243470 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-17 01:14:53.243473 | orchestrator | Tuesday 17 March 2026 01:12:51 +0000 (0:00:00.621) 0:02:27.074 ********* 2026-03-17 01:14:53.243478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:14:53.243482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': 
{}}})  2026-03-17 01:14:53.243490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 
01:14:53.243502 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:53.243516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:14:53.243520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:14:53.243523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:14:53.243539 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:53.243553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:14:53.243557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:14:53.243561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:14:53.243573 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:53.243576 | orchestrator | 2026-03-17 01:14:53.243579 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:14:53.243582 | orchestrator | Tuesday 17 March 2026 01:12:51 +0000 (0:00:00.676) 0:02:27.750 ********* 2026-03-17 01:14:53.243585 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:14:53.243589 | orchestrator | 2026-03-17 01:14:53.243592 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-17 01:14:53.243595 | orchestrator | Tuesday 17 March 2026 01:12:52 +0000 (0:00:00.513) 0:02:28.264 ********* 2026-03-17 01:14:53.243600 | 
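The per-service tasks above ("Ensuring config directories exist", "Copying over existing policy file", the cert-copy tasks) loop over a dict of container definitions and act per item, which is why each produces one `changed:`/`skipping:` line per service per node. A minimal sketch of that selection logic, with an illustrative data subset (the helper name and the filter details are assumptions, not kolla-ansible's actual variables):

```python
def enabled_services(services, host_groups):
    """Yield (name, definition) pairs for services that are enabled and
    whose 'group' matches one of this host's inventory groups -- the
    filter a kolla-style dict2items loop applies to each item."""
    for name, svc in services.items():
        if svc.get("enabled") and svc.get("group") in host_groups:
            yield name, svc


# Illustrative subset of the container definitions seen in the log.
services = {
    "octavia-api": {"enabled": True, "group": "octavia-api",
                    "container_name": "octavia_api"},
    "octavia-worker": {"enabled": True, "group": "octavia-worker",
                       "container_name": "octavia_worker"},
    "octavia-extra": {"enabled": False, "group": "octavia-api"},
}

# A host only in the octavia-api group acts on octavia-api alone.
selected = dict(enabled_services(services, {"octavia-api"}))
```

Items that fail the filter are what appear in the log as `skipping:` entries rather than being omitted outright.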
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.243613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.243617 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.243623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.243626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.243630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.243635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243643 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243655 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243677 | orchestrator | 2026-03-17 01:14:53.243680 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-17 01:14:53.243683 | orchestrator | Tuesday 17 March 2026 01:12:57 +0000 (0:00:05.227) 0:02:33.491 ********* 2026-03-17 01:14:53.243687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:14:53.243690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:14:53.243693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:14:53.243707 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:53.243710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:14:53.243716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:14:53.243719 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:14:53.243731 | orchestrator | skipping: [testbed-node-1] 
2026-03-17 01:14:53.243736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:14:53.243743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:14:53.243746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:14:53.243756 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:53.243759 | orchestrator | 2026-03-17 01:14:53.243779 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-17 01:14:53.243784 | orchestrator | Tuesday 17 March 2026 01:12:58 +0000 (0:00:00.647) 0:02:34.139 ********* 2026-03-17 01:14:53.243791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:14:53.243804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:14:53.243808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:14:53.243818 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:53.243821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:14:53.243827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:14:53.243841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:14:53.243859 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:53.243864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:14:53.243869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:14:53.243877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:14:53.243911 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:14:53.243917 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:53.243922 | orchestrator | 2026-03-17 01:14:53.243928 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-17 01:14:53.243932 | orchestrator | Tuesday 17 March 2026 01:12:59 +0000 (0:00:00.887) 0:02:35.027 ********* 2026-03-17 01:14:53.243935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.243938 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.243944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.243952 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.243956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.243959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.243962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.243999 | orchestrator | 2026-03-17 01:14:53.244005 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-17 01:14:53.244008 | orchestrator | Tuesday 17 March 2026 01:13:03 +0000 (0:00:04.435) 0:02:39.462 ********* 2026-03-17 01:14:53.244012 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-17 01:14:53.244015 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-17 01:14:53.244019 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-17 01:14:53.244022 | orchestrator | 2026-03-17 01:14:53.244025 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-17 01:14:53.244028 | orchestrator | Tuesday 17 March 2026 01:13:05 +0000 (0:00:01.690) 0:02:41.153 ********* 2026-03-17 01:14:53.244036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.244040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.244043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.244047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 
01:14:53.244053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.244058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.244064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244106 | orchestrator | 2026-03-17 01:14:53.244109 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-17 01:14:53.244112 | orchestrator | Tuesday 17 March 2026 01:13:22 +0000 (0:00:17.507) 0:02:58.660 ********* 2026-03-17 01:14:53.244115 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:14:53.244118 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.244121 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:14:53.244125 | orchestrator | 2026-03-17 01:14:53.244128 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-17 01:14:53.244131 | orchestrator | Tuesday 17 March 2026 01:13:24 +0000 (0:00:01.537) 0:03:00.198 ********* 2026-03-17 01:14:53.244134 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-17 01:14:53.244137 | orchestrator | changed: 
[testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-17 01:14:53.244140 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-17 01:14:53.244146 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-17 01:14:53.244149 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-17 01:14:53.244152 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-17 01:14:53.244155 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-17 01:14:53.244158 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-17 01:14:53.244162 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-17 01:14:53.244167 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-17 01:14:53.244174 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-17 01:14:53.244181 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-17 01:14:53.244185 | orchestrator | 2026-03-17 01:14:53.244190 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-17 01:14:53.244194 | orchestrator | Tuesday 17 March 2026 01:13:29 +0000 (0:00:05.069) 0:03:05.267 ********* 2026-03-17 01:14:53.244199 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-17 01:14:53.244204 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-17 01:14:53.244209 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-17 01:14:53.244213 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-17 01:14:53.244218 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-17 01:14:53.244223 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-17 01:14:53.244228 | orchestrator | changed: 
[testbed-node-0] => (item=server_ca.cert.pem) 2026-03-17 01:14:53.244233 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-17 01:14:53.244237 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-17 01:14:53.244242 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-17 01:14:53.244247 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-17 01:14:53.244253 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-17 01:14:53.244258 | orchestrator | 2026-03-17 01:14:53.244263 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-17 01:14:53.244273 | orchestrator | Tuesday 17 March 2026 01:13:34 +0000 (0:00:05.263) 0:03:10.531 ********* 2026-03-17 01:14:53.244279 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-17 01:14:53.244284 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-17 01:14:53.244291 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-17 01:14:53.244294 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-17 01:14:53.244297 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-17 01:14:53.244300 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-17 01:14:53.244303 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-17 01:14:53.244307 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-17 01:14:53.244313 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-17 01:14:53.244316 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-17 01:14:53.244319 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-17 01:14:53.244322 | orchestrator | changed: [testbed-node-1] => 
(item=server_ca.key.pem) 2026-03-17 01:14:53.244325 | orchestrator | 2026-03-17 01:14:53.244328 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-17 01:14:53.244331 | orchestrator | Tuesday 17 March 2026 01:13:40 +0000 (0:00:05.852) 0:03:16.384 ********* 2026-03-17 01:14:53.244335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.244342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.244346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:14:53.244351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.244357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.244360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:14:53.244367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:14:53.244428 | orchestrator | 2026-03-17 01:14:53.244433 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:14:53.244438 | orchestrator | Tuesday 17 March 2026 01:13:43 +0000 (0:00:03.464) 0:03:19.848 ********* 2026-03-17 01:14:53.244443 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:14:53.244449 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:14:53.244454 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:14:53.244459 | orchestrator | 2026-03-17 01:14:53.244464 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-17 01:14:53.244469 | orchestrator | Tuesday 17 March 2026 01:13:44 +0000 (0:00:00.326) 0:03:20.174 ********* 2026-03-17 01:14:53.244473 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.244476 | orchestrator | 2026-03-17 01:14:53.244479 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-17 01:14:53.244482 | orchestrator | Tuesday 17 March 2026 
01:13:46 +0000 (0:00:02.040) 0:03:22.215 ********* 2026-03-17 01:14:53.244485 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.244488 | orchestrator | 2026-03-17 01:14:53.244491 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-17 01:14:53.244494 | orchestrator | Tuesday 17 March 2026 01:13:48 +0000 (0:00:01.907) 0:03:24.122 ********* 2026-03-17 01:14:53.244497 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.244500 | orchestrator | 2026-03-17 01:14:53.244504 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-17 01:14:53.244507 | orchestrator | Tuesday 17 March 2026 01:13:50 +0000 (0:00:02.183) 0:03:26.306 ********* 2026-03-17 01:14:53.244510 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.244513 | orchestrator | 2026-03-17 01:14:53.244516 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-17 01:14:53.244519 | orchestrator | Tuesday 17 March 2026 01:13:53 +0000 (0:00:02.670) 0:03:28.977 ********* 2026-03-17 01:14:53.244522 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.244525 | orchestrator | 2026-03-17 01:14:53.244533 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-17 01:14:53.244536 | orchestrator | Tuesday 17 March 2026 01:14:11 +0000 (0:00:18.228) 0:03:47.206 ********* 2026-03-17 01:14:53.244540 | orchestrator | 2026-03-17 01:14:53.244543 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-17 01:14:53.244546 | orchestrator | Tuesday 17 March 2026 01:14:11 +0000 (0:00:00.068) 0:03:47.274 ********* 2026-03-17 01:14:53.244549 | orchestrator | 2026-03-17 01:14:53.244552 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-17 01:14:53.244555 | orchestrator | Tuesday 17 March 
2026 01:14:11 +0000 (0:00:00.063) 0:03:47.338 ********* 2026-03-17 01:14:53.244558 | orchestrator | 2026-03-17 01:14:53.244561 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-17 01:14:53.244566 | orchestrator | Tuesday 17 March 2026 01:14:11 +0000 (0:00:00.071) 0:03:47.410 ********* 2026-03-17 01:14:53.244570 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.244573 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:14:53.244576 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:14:53.244579 | orchestrator | 2026-03-17 01:14:53.244582 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-17 01:14:53.244585 | orchestrator | Tuesday 17 March 2026 01:14:25 +0000 (0:00:14.175) 0:04:01.586 ********* 2026-03-17 01:14:53.244589 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.244592 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:14:53.244595 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:14:53.244598 | orchestrator | 2026-03-17 01:14:53.244601 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-17 01:14:53.244604 | orchestrator | Tuesday 17 March 2026 01:14:36 +0000 (0:00:10.606) 0:04:12.193 ********* 2026-03-17 01:14:53.244607 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.244610 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:14:53.244613 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:14:53.244616 | orchestrator | 2026-03-17 01:14:53.244620 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-17 01:14:53.244623 | orchestrator | Tuesday 17 March 2026 01:14:42 +0000 (0:00:05.793) 0:04:17.986 ********* 2026-03-17 01:14:53.244626 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.244629 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:14:53.244632 | 
orchestrator | changed: [testbed-node-2] 2026-03-17 01:14:53.244636 | orchestrator | 2026-03-17 01:14:53.244639 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-17 01:14:53.244642 | orchestrator | Tuesday 17 March 2026 01:14:47 +0000 (0:00:04.919) 0:04:22.905 ********* 2026-03-17 01:14:53.244645 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:14:53.244648 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:14:53.244651 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:14:53.244654 | orchestrator | 2026-03-17 01:14:53.244657 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:14:53.244661 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:14:53.244665 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-17 01:14:53.244668 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-17 01:14:53.244671 | orchestrator | 2026-03-17 01:14:53.244674 | orchestrator | 2026-03-17 01:14:53.244677 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:14:53.244680 | orchestrator | Tuesday 17 March 2026 01:14:52 +0000 (0:00:04.969) 0:04:27.875 ********* 2026-03-17 01:14:53.244683 | orchestrator | =============================================================================== 2026-03-17 01:14:53.244686 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 18.23s 2026-03-17 01:14:53.244692 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.51s 2026-03-17 01:14:53.244695 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.01s 2026-03-17 01:14:53.244698 | orchestrator | octavia : Adding 
octavia related roles --------------------------------- 16.30s 2026-03-17 01:14:53.244701 | orchestrator | octavia : Restart octavia-api container -------------------------------- 14.18s 2026-03-17 01:14:53.244705 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.61s 2026-03-17 01:14:53.244708 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.31s 2026-03-17 01:14:53.244711 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.83s 2026-03-17 01:14:53.244714 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.11s 2026-03-17 01:14:53.244717 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 6.39s 2026-03-17 01:14:53.244720 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.20s 2026-03-17 01:14:53.244723 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.85s 2026-03-17 01:14:53.244726 | orchestrator | octavia : Get security groups for octavia ------------------------------- 5.83s 2026-03-17 01:14:53.244729 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.79s 2026-03-17 01:14:53.244732 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.26s 2026-03-17 01:14:53.244735 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.23s 2026-03-17 01:14:53.244738 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.07s 2026-03-17 01:14:53.244741 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 4.97s 2026-03-17 01:14:53.244746 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 4.92s 2026-03-17 01:14:53.244749 | orchestrator | octavia : Create loadbalancer 
management subnet ------------------------- 4.81s 2026-03-17 01:14:53.244753 | orchestrator | 2026-03-17 01:14:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-17 01:15:54.020645 | orchestrator | 2026-03-17 01:15:54.353235 | orchestrator | 2026-03-17 01:15:54.357697 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Mar 17 01:15:54 UTC 2026 2026-03-17 01:15:54.357746 | orchestrator | 2026-03-17 01:15:54.698557 | orchestrator | ok: Runtime: 0:32:34.505809 2026-03-17 01:15:54.963165 | 2026-03-17 01:15:54.963320 | TASK [Bootstrap services] 2026-03-17 01:15:55.730381 | orchestrator | 2026-03-17 01:15:55.730478 | orchestrator | # BOOTSTRAP 2026-03-17 01:15:55.730489 | orchestrator | 2026-03-17 01:15:55.730498 | orchestrator | + set -e 2026-03-17 01:15:55.730506 | orchestrator | + echo 2026-03-17 01:15:55.730515 | orchestrator | + echo '# BOOTSTRAP' 2026-03-17 01:15:55.730525 | orchestrator | + echo 2026-03-17 01:15:55.730549 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-17 01:15:55.738933 | orchestrator | + set -e 2026-03-17 01:15:55.739383 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-17 01:15:59.880735 | orchestrator | 2026-03-17 01:15:59 | INFO  | It takes a moment until task 4cae4273-58f8-47b9-93d2-66519eaa267d (flavor-manager) has been started and output is visible here. 
2026-03-17 01:16:05.956534 | orchestrator | 2026-03-17 01:16:02 | INFO  | Flavor SCS-1L-1 created 2026-03-17 01:16:05.956621 | orchestrator | 2026-03-17 01:16:02 | INFO  | Flavor SCS-1L-1-5 created 2026-03-17 01:16:05.956637 | orchestrator | 2026-03-17 01:16:02 | INFO  | Flavor SCS-1V-2 created 2026-03-17 01:16:05.956643 | orchestrator | 2026-03-17 01:16:02 | INFO  | Flavor SCS-1V-2-5 created 2026-03-17 01:16:05.956648 | orchestrator | 2026-03-17 01:16:02 | INFO  | Flavor SCS-1V-4 created 2026-03-17 01:16:05.956680 | orchestrator | 2026-03-17 01:16:02 | INFO  | Flavor SCS-1V-4-10 created 2026-03-17 01:16:05.956687 | orchestrator | 2026-03-17 01:16:03 | INFO  | Flavor SCS-1V-8 created 2026-03-17 01:16:05.956692 | orchestrator | 2026-03-17 01:16:03 | INFO  | Flavor SCS-1V-8-20 created 2026-03-17 01:16:05.956702 | orchestrator | 2026-03-17 01:16:03 | INFO  | Flavor SCS-2V-4 created 2026-03-17 01:16:05.956707 | orchestrator | 2026-03-17 01:16:03 | INFO  | Flavor SCS-2V-4-10 created 2026-03-17 01:16:05.956711 | orchestrator | 2026-03-17 01:16:03 | INFO  | Flavor SCS-2V-8 created 2026-03-17 01:16:05.956716 | orchestrator | 2026-03-17 01:16:03 | INFO  | Flavor SCS-2V-8-20 created 2026-03-17 01:16:05.956721 | orchestrator | 2026-03-17 01:16:03 | INFO  | Flavor SCS-2V-16 created 2026-03-17 01:16:05.956725 | orchestrator | 2026-03-17 01:16:03 | INFO  | Flavor SCS-2V-16-50 created 2026-03-17 01:16:05.956730 | orchestrator | 2026-03-17 01:16:04 | INFO  | Flavor SCS-4V-8 created 2026-03-17 01:16:05.956735 | orchestrator | 2026-03-17 01:16:04 | INFO  | Flavor SCS-4V-8-20 created 2026-03-17 01:16:05.956739 | orchestrator | 2026-03-17 01:16:04 | INFO  | Flavor SCS-4V-16 created 2026-03-17 01:16:05.956744 | orchestrator | 2026-03-17 01:16:04 | INFO  | Flavor SCS-4V-16-50 created 2026-03-17 01:16:05.956749 | orchestrator | 2026-03-17 01:16:04 | INFO  | Flavor SCS-4V-32 created 2026-03-17 01:16:05.956753 | orchestrator | 2026-03-17 01:16:04 | INFO  | Flavor SCS-4V-32-100 created 
2026-03-17 01:16:05.956758 | orchestrator | 2026-03-17 01:16:04 | INFO  | Flavor SCS-8V-16 created 2026-03-17 01:16:05.956763 | orchestrator | 2026-03-17 01:16:04 | INFO  | Flavor SCS-8V-16-50 created 2026-03-17 01:16:05.956768 | orchestrator | 2026-03-17 01:16:05 | INFO  | Flavor SCS-8V-32 created 2026-03-17 01:16:05.956772 | orchestrator | 2026-03-17 01:16:05 | INFO  | Flavor SCS-8V-32-100 created 2026-03-17 01:16:05.956777 | orchestrator | 2026-03-17 01:16:05 | INFO  | Flavor SCS-16V-32 created 2026-03-17 01:16:05.956782 | orchestrator | 2026-03-17 01:16:05 | INFO  | Flavor SCS-16V-32-100 created 2026-03-17 01:16:05.956786 | orchestrator | 2026-03-17 01:16:05 | INFO  | Flavor SCS-2V-4-20s created 2026-03-17 01:16:05.956791 | orchestrator | 2026-03-17 01:16:05 | INFO  | Flavor SCS-4V-8-50s created 2026-03-17 01:16:05.956796 | orchestrator | 2026-03-17 01:16:05 | INFO  | Flavor SCS-8V-32-100s created 2026-03-17 01:16:08.093869 | orchestrator | 2026-03-17 01:16:08 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-17 01:16:18.194075 | orchestrator | 2026-03-17 01:16:18 | INFO  | Task c1c87156-949d-4e7a-a442-5693ae14348a (bootstrap-basic) was prepared for execution. 2026-03-17 01:16:18.194132 | orchestrator | 2026-03-17 01:16:18 | INFO  | It takes a moment until task c1c87156-949d-4e7a-a442-5693ae14348a (bootstrap-basic) has been started and output is visible here. 
2026-03-17 01:17:02.625733 | orchestrator | 2026-03-17 01:17:02.625797 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-17 01:17:02.625806 | orchestrator | 2026-03-17 01:17:02.625811 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 01:17:02.625817 | orchestrator | Tuesday 17 March 2026 01:16:22 +0000 (0:00:00.077) 0:00:00.077 ********* 2026-03-17 01:17:02.625823 | orchestrator | ok: [localhost] 2026-03-17 01:17:02.625829 | orchestrator | 2026-03-17 01:17:02.625834 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-17 01:17:02.625839 | orchestrator | Tuesday 17 March 2026 01:16:24 +0000 (0:00:01.802) 0:00:01.879 ********* 2026-03-17 01:17:02.625844 | orchestrator | ok: [localhost] 2026-03-17 01:17:02.625849 | orchestrator | 2026-03-17 01:17:02.625855 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-17 01:17:02.625860 | orchestrator | Tuesday 17 March 2026 01:16:32 +0000 (0:00:08.673) 0:00:10.553 ********* 2026-03-17 01:17:02.625865 | orchestrator | changed: [localhost] 2026-03-17 01:17:02.625871 | orchestrator | 2026-03-17 01:17:02.625876 | orchestrator | TASK [Create public network] *************************************************** 2026-03-17 01:17:02.625881 | orchestrator | Tuesday 17 March 2026 01:16:40 +0000 (0:00:07.203) 0:00:17.756 ********* 2026-03-17 01:17:02.625896 | orchestrator | changed: [localhost] 2026-03-17 01:17:02.625901 | orchestrator | 2026-03-17 01:17:02.625912 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-17 01:17:02.625917 | orchestrator | Tuesday 17 March 2026 01:16:44 +0000 (0:00:04.704) 0:00:22.461 ********* 2026-03-17 01:17:02.625924 | orchestrator | changed: [localhost] 2026-03-17 01:17:02.625930 | orchestrator | 2026-03-17 01:17:02.625935 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-17 01:17:02.625940 | orchestrator | Tuesday 17 March 2026 01:16:51 +0000 (0:00:06.651) 0:00:29.112 ********* 2026-03-17 01:17:02.625945 | orchestrator | changed: [localhost] 2026-03-17 01:17:02.625951 | orchestrator | 2026-03-17 01:17:02.625956 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-17 01:17:02.625961 | orchestrator | Tuesday 17 March 2026 01:16:55 +0000 (0:00:04.117) 0:00:33.230 ********* 2026-03-17 01:17:02.625966 | orchestrator | changed: [localhost] 2026-03-17 01:17:02.625971 | orchestrator | 2026-03-17 01:17:02.625977 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-17 01:17:02.625987 | orchestrator | Tuesday 17 March 2026 01:16:59 +0000 (0:00:03.552) 0:00:36.783 ********* 2026-03-17 01:17:02.625992 | orchestrator | ok: [localhost] 2026-03-17 01:17:02.625997 | orchestrator | 2026-03-17 01:17:02.626003 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:17:02.626008 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:17:02.626045 | orchestrator | 2026-03-17 01:17:02.626051 | orchestrator | 2026-03-17 01:17:02.626056 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:17:02.626061 | orchestrator | Tuesday 17 March 2026 01:17:02 +0000 (0:00:03.311) 0:00:40.094 ********* 2026-03-17 01:17:02.626067 | orchestrator | =============================================================================== 2026-03-17 01:17:02.626072 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.67s 2026-03-17 01:17:02.626077 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.20s 2026-03-17 01:17:02.626123 | 
orchestrator | Set public network to default ------------------------------------------- 6.65s 2026-03-17 01:17:02.626133 | orchestrator | Create public network --------------------------------------------------- 4.70s 2026-03-17 01:17:02.626160 | orchestrator | Create public subnet ---------------------------------------------------- 4.12s 2026-03-17 01:17:02.626169 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.55s 2026-03-17 01:17:02.626178 | orchestrator | Create manager role ----------------------------------------------------- 3.31s 2026-03-17 01:17:02.626187 | orchestrator | Gathering Facts --------------------------------------------------------- 1.80s 2026-03-17 01:17:04.885161 | orchestrator | 2026-03-17 01:17:04 | INFO  | It takes a moment until task 9003a12e-9d42-4db2-8509-f78bf75be802 (image-manager) has been started and output is visible here. 2026-03-17 01:17:45.638239 | orchestrator | 2026-03-17 01:17:07 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-17 01:17:45.638289 | orchestrator | 2026-03-17 01:17:07 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-17 01:17:45.638295 | orchestrator | 2026-03-17 01:17:07 | INFO  | Importing image Cirros 0.6.2 2026-03-17 01:17:45.638299 | orchestrator | 2026-03-17 01:17:07 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-17 01:17:45.638303 | orchestrator | 2026-03-17 01:17:09 | INFO  | Waiting for image to leave queued state... 2026-03-17 01:17:45.638307 | orchestrator | 2026-03-17 01:17:11 | INFO  | Waiting for import to complete... 
2026-03-17 01:17:45.638310 | orchestrator | 2026-03-17 01:17:22 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-17 01:17:45.638314 | orchestrator | 2026-03-17 01:17:22 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-17 01:17:45.638317 | orchestrator | 2026-03-17 01:17:22 | INFO  | Setting internal_version = 0.6.2 2026-03-17 01:17:45.638321 | orchestrator | 2026-03-17 01:17:22 | INFO  | Setting image_original_user = cirros 2026-03-17 01:17:45.638325 | orchestrator | 2026-03-17 01:17:22 | INFO  | Adding tag os:cirros 2026-03-17 01:17:45.638328 | orchestrator | 2026-03-17 01:17:22 | INFO  | Setting property architecture: x86_64 2026-03-17 01:17:45.638331 | orchestrator | 2026-03-17 01:17:22 | INFO  | Setting property hw_disk_bus: scsi 2026-03-17 01:17:45.638334 | orchestrator | 2026-03-17 01:17:23 | INFO  | Setting property hw_rng_model: virtio 2026-03-17 01:17:45.638338 | orchestrator | 2026-03-17 01:17:23 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-17 01:17:45.638341 | orchestrator | 2026-03-17 01:17:23 | INFO  | Setting property hw_watchdog_action: reset 2026-03-17 01:17:45.638344 | orchestrator | 2026-03-17 01:17:23 | INFO  | Setting property hypervisor_type: qemu 2026-03-17 01:17:45.638348 | orchestrator | 2026-03-17 01:17:23 | INFO  | Setting property os_distro: cirros 2026-03-17 01:17:45.638351 | orchestrator | 2026-03-17 01:17:23 | INFO  | Setting property os_purpose: minimal 2026-03-17 01:17:45.638354 | orchestrator | 2026-03-17 01:17:24 | INFO  | Setting property replace_frequency: never 2026-03-17 01:17:45.638357 | orchestrator | 2026-03-17 01:17:24 | INFO  | Setting property uuid_validity: none 2026-03-17 01:17:45.638366 | orchestrator | 2026-03-17 01:17:24 | INFO  | Setting property provided_until: none 2026-03-17 01:17:45.638369 | orchestrator | 2026-03-17 01:17:24 | INFO  | Setting property image_description: Cirros 2026-03-17 01:17:45.638372 | orchestrator | 2026-03-17 01:17:24 | INFO  | 
Setting property image_name: Cirros 2026-03-17 01:17:45.638375 | orchestrator | 2026-03-17 01:17:25 | INFO  | Setting property internal_version: 0.6.2 2026-03-17 01:17:45.638378 | orchestrator | 2026-03-17 01:17:25 | INFO  | Setting property image_original_user: cirros 2026-03-17 01:17:45.638391 | orchestrator | 2026-03-17 01:17:25 | INFO  | Setting property os_version: 0.6.2 2026-03-17 01:17:45.638397 | orchestrator | 2026-03-17 01:17:25 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-17 01:17:45.638401 | orchestrator | 2026-03-17 01:17:25 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-17 01:17:45.638404 | orchestrator | 2026-03-17 01:17:26 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-17 01:17:45.638407 | orchestrator | 2026-03-17 01:17:26 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-17 01:17:45.638411 | orchestrator | 2026-03-17 01:17:26 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-17 01:17:45.638414 | orchestrator | 2026-03-17 01:17:26 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-17 01:17:45.638419 | orchestrator | 2026-03-17 01:17:26 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-17 01:17:45.638422 | orchestrator | 2026-03-17 01:17:26 | INFO  | Importing image Cirros 0.6.3 2026-03-17 01:17:45.638425 | orchestrator | 2026-03-17 01:17:26 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-17 01:17:45.638429 | orchestrator | 2026-03-17 01:17:28 | INFO  | Waiting for image to leave queued state... 2026-03-17 01:17:45.638432 | orchestrator | 2026-03-17 01:17:30 | INFO  | Waiting for import to complete... 
2026-03-17 01:17:45.638442 | orchestrator | 2026-03-17 01:17:40 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-17 01:17:45.638445 | orchestrator | 2026-03-17 01:17:41 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-17 01:17:45.638449 | orchestrator | 2026-03-17 01:17:41 | INFO  | Setting internal_version = 0.6.3 2026-03-17 01:17:45.638452 | orchestrator | 2026-03-17 01:17:41 | INFO  | Setting image_original_user = cirros 2026-03-17 01:17:45.638455 | orchestrator | 2026-03-17 01:17:41 | INFO  | Adding tag os:cirros 2026-03-17 01:17:45.638458 | orchestrator | 2026-03-17 01:17:41 | INFO  | Setting property architecture: x86_64 2026-03-17 01:17:45.638461 | orchestrator | 2026-03-17 01:17:41 | INFO  | Setting property hw_disk_bus: scsi 2026-03-17 01:17:45.638464 | orchestrator | 2026-03-17 01:17:42 | INFO  | Setting property hw_rng_model: virtio 2026-03-17 01:17:45.638467 | orchestrator | 2026-03-17 01:17:42 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-17 01:17:45.638470 | orchestrator | 2026-03-17 01:17:42 | INFO  | Setting property hw_watchdog_action: reset 2026-03-17 01:17:45.638473 | orchestrator | 2026-03-17 01:17:42 | INFO  | Setting property hypervisor_type: qemu 2026-03-17 01:17:45.638476 | orchestrator | 2026-03-17 01:17:42 | INFO  | Setting property os_distro: cirros 2026-03-17 01:17:45.638480 | orchestrator | 2026-03-17 01:17:42 | INFO  | Setting property os_purpose: minimal 2026-03-17 01:17:45.638483 | orchestrator | 2026-03-17 01:17:43 | INFO  | Setting property replace_frequency: never 2026-03-17 01:17:45.638486 | orchestrator | 2026-03-17 01:17:43 | INFO  | Setting property uuid_validity: none 2026-03-17 01:17:45.638489 | orchestrator | 2026-03-17 01:17:43 | INFO  | Setting property provided_until: none 2026-03-17 01:17:45.638492 | orchestrator | 2026-03-17 01:17:43 | INFO  | Setting property image_description: Cirros 2026-03-17 01:17:45.638495 | orchestrator | 2026-03-17 01:17:43 | INFO  | 
Setting property image_name: Cirros 2026-03-17 01:17:45.638498 | orchestrator | 2026-03-17 01:17:44 | INFO  | Setting property internal_version: 0.6.3 2026-03-17 01:17:45.638528 | orchestrator | 2026-03-17 01:17:44 | INFO  | Setting property image_original_user: cirros 2026-03-17 01:17:45.638532 | orchestrator | 2026-03-17 01:17:44 | INFO  | Setting property os_version: 0.6.3 2026-03-17 01:17:45.638535 | orchestrator | 2026-03-17 01:17:44 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-17 01:17:45.638539 | orchestrator | 2026-03-17 01:17:44 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-17 01:17:45.638542 | orchestrator | 2026-03-17 01:17:44 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-17 01:17:45.638545 | orchestrator | 2026-03-17 01:17:44 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-17 01:17:45.638548 | orchestrator | 2026-03-17 01:17:44 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-17 01:17:45.949132 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-17 01:17:48.467629 | orchestrator | 2026-03-17 01:17:48 | INFO  | date: 2026-03-16 2026-03-17 01:17:48.467687 | orchestrator | 2026-03-17 01:17:48 | INFO  | image: octavia-amphora-haproxy-2024.2.20260316.qcow2 2026-03-17 01:17:48.467708 | orchestrator | 2026-03-17 01:17:48 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260316.qcow2 2026-03-17 01:17:48.467717 | orchestrator | 2026-03-17 01:17:48 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260316.qcow2.CHECKSUM 2026-03-17 01:17:48.834548 | orchestrator | 2026-03-17 01:17:48 | INFO  | checksum: be12c9016fe82cfba981ee6d08be3116e821cd229b6d07cee651ecc6c4a84c1a 2026-03-17 01:17:48.901378 | orchestrator | 
2026-03-17 01:17:48 | INFO  | It takes a moment until task 214d18da-f6da-4633-9632-38d5f19b3593 (image-manager) has been started and output is visible here. 2026-03-17 01:18:49.583342 | orchestrator | 2026-03-17 01:17:50 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-16' 2026-03-17 01:18:49.583398 | orchestrator | 2026-03-17 01:17:51 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260316.qcow2: 200 2026-03-17 01:18:49.583405 | orchestrator | 2026-03-17 01:17:51 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-16 2026-03-17 01:18:49.583434 | orchestrator | 2026-03-17 01:17:51 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260316.qcow2 2026-03-17 01:18:49.583439 | orchestrator | 2026-03-17 01:17:52 | INFO  | Waiting for image to leave queued state... 2026-03-17 01:18:49.583443 | orchestrator | 2026-03-17 01:17:54 | INFO  | Waiting for import to complete... 2026-03-17 01:18:49.583447 | orchestrator | 2026-03-17 01:18:04 | INFO  | Waiting for import to complete... 2026-03-17 01:18:49.583451 | orchestrator | 2026-03-17 01:18:15 | INFO  | Waiting for import to complete... 2026-03-17 01:18:49.583455 | orchestrator | 2026-03-17 01:18:25 | INFO  | Waiting for import to complete... 2026-03-17 01:18:49.583460 | orchestrator | 2026-03-17 01:18:35 | INFO  | Waiting for import to complete... 
2026-03-17 01:18:49.583464 | orchestrator | 2026-03-17 01:18:45 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-16' successfully completed, reloading images 2026-03-17 01:18:49.583469 | orchestrator | 2026-03-17 01:18:45 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-16' 2026-03-17 01:18:49.583473 | orchestrator | 2026-03-17 01:18:45 | INFO  | Setting internal_version = 2026-03-16 2026-03-17 01:18:49.583488 | orchestrator | 2026-03-17 01:18:45 | INFO  | Setting image_original_user = ubuntu 2026-03-17 01:18:49.583492 | orchestrator | 2026-03-17 01:18:45 | INFO  | Adding tag amphora 2026-03-17 01:18:49.583496 | orchestrator | 2026-03-17 01:18:45 | INFO  | Adding tag os:ubuntu 2026-03-17 01:18:49.583500 | orchestrator | 2026-03-17 01:18:45 | INFO  | Setting property architecture: x86_64 2026-03-17 01:18:49.583504 | orchestrator | 2026-03-17 01:18:46 | INFO  | Setting property hw_disk_bus: scsi 2026-03-17 01:18:49.583508 | orchestrator | 2026-03-17 01:18:46 | INFO  | Setting property hw_rng_model: virtio 2026-03-17 01:18:49.583512 | orchestrator | 2026-03-17 01:18:46 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-17 01:18:49.583515 | orchestrator | 2026-03-17 01:18:46 | INFO  | Setting property hw_watchdog_action: reset 2026-03-17 01:18:49.583519 | orchestrator | 2026-03-17 01:18:46 | INFO  | Setting property hypervisor_type: qemu 2026-03-17 01:18:49.583523 | orchestrator | 2026-03-17 01:18:47 | INFO  | Setting property os_distro: ubuntu 2026-03-17 01:18:49.583527 | orchestrator | 2026-03-17 01:18:47 | INFO  | Setting property replace_frequency: quarterly 2026-03-17 01:18:49.583531 | orchestrator | 2026-03-17 01:18:47 | INFO  | Setting property uuid_validity: last-1 2026-03-17 01:18:49.583534 | orchestrator | 2026-03-17 01:18:47 | INFO  | Setting property provided_until: none 2026-03-17 01:18:49.583543 | orchestrator | 2026-03-17 01:18:47 | INFO  | Setting property os_purpose: network 2026-03-17 01:18:49.583552 | orchestrator 
| 2026-03-17 01:18:47 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-03-17 01:18:49.583563 | orchestrator | 2026-03-17 01:18:48 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-03-17 01:18:49.583567 | orchestrator | 2026-03-17 01:18:48 | INFO  | Setting property internal_version: 2026-03-16
2026-03-17 01:18:49.583571 | orchestrator | 2026-03-17 01:18:48 | INFO  | Setting property image_original_user: ubuntu
2026-03-17 01:18:49.583574 | orchestrator | 2026-03-17 01:18:48 | INFO  | Setting property os_version: 2026-03-16
2026-03-17 01:18:49.583578 | orchestrator | 2026-03-17 01:18:48 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260316.qcow2
2026-03-17 01:18:49.583582 | orchestrator | 2026-03-17 01:18:49 | INFO  | Setting property image_build_date: 2026-03-16
2026-03-17 01:18:49.583586 | orchestrator | 2026-03-17 01:18:49 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-16'
2026-03-17 01:18:49.583590 | orchestrator | 2026-03-17 01:18:49 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-16'
2026-03-17 01:18:49.583594 | orchestrator | 2026-03-17 01:18:49 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-03-17 01:18:49.583605 | orchestrator | 2026-03-17 01:18:49 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-03-17 01:18:49.583609 | orchestrator | 2026-03-17 01:18:49 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-03-17 01:18:49.583613 | orchestrator | 2026-03-17 01:18:49 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-03-17 01:18:50.090276 | orchestrator | ok: Runtime: 0:02:54.430421
2026-03-17 01:18:50.114798 |
2026-03-17 01:18:50.115025 | TASK [Run checks]
2026-03-17 01:18:50.814531 | orchestrator | + set -e
2026-03-17 01:18:50.814659 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-17 01:18:50.814670 | orchestrator | ++ export INTERACTIVE=false
2026-03-17 01:18:50.814681 | orchestrator | ++ INTERACTIVE=false
2026-03-17 01:18:50.814688 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-17 01:18:50.814694 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-17 01:18:50.814709 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-17 01:18:50.815726 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-17 01:18:50.820249 | orchestrator |
2026-03-17 01:18:50.820307 | orchestrator | # CHECK
2026-03-17 01:18:50.820313 | orchestrator |
2026-03-17 01:18:50.820318 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-17 01:18:50.820325 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-17 01:18:50.820329 | orchestrator | + echo
2026-03-17 01:18:50.820333 | orchestrator | + echo '# CHECK'
2026-03-17 01:18:50.820337 | orchestrator | + echo
2026-03-17 01:18:50.820343 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-17 01:18:50.821382 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-17 01:18:50.880078 | orchestrator |
2026-03-17 01:18:50.880126 | orchestrator | ## Containers @ testbed-manager
2026-03-17 01:18:50.880132 | orchestrator |
2026-03-17 01:18:50.880139 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-17 01:18:50.880143 | orchestrator | + echo
2026-03-17 01:18:50.880148 | orchestrator | + echo '## Containers @ testbed-manager'
2026-03-17 01:18:50.880153 | orchestrator | + echo
2026-03-17 01:18:50.880157 | orchestrator | + osism container testbed-manager ps
2026-03-17 01:18:52.550172 | orchestrator | 2026-03-17 01:18:52 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-03-17 01:18:52.936370 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-17 01:18:52.936442 | orchestrator | aa35f8e37835 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_blackbox_exporter
2026-03-17 01:18:52.936453 | orchestrator | 55e2f950f8d4 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_alertmanager
2026-03-17 01:18:52.936460 | orchestrator | 5825079c3f1b registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor
2026-03-17 01:18:52.936465 | orchestrator | 30c2270f58bd registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter
2026-03-17 01:18:52.936468 | orchestrator | 5d6e1b258883 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_server
2026-03-17 01:18:52.936475 | orchestrator | 679622d40ac6 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes cephclient
2026-03-17 01:18:52.936480 | orchestrator | 1c98944bb7e5 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-03-17 01:18:52.936541 | orchestrator | d0b40738f089 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-03-17 01:18:52.936559 | orchestrator | 7f6fe8444122 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin
2026-03-17 01:18:52.937001 | orchestrator | b7e8074d2b31 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2026-03-17 01:18:52.937018 | orchestrator | c5e2370f42da registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 29 minutes openstackclient
2026-03-17 01:18:52.937189 | orchestrator | 55ba9e4a93e9 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 30 minutes ago Up 29 minutes (healthy) 8080/tcp homer
2026-03-17 01:18:52.937201 | orchestrator | 64a855875690 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-03-17 01:18:52.937212 | orchestrator | e10afc68ec41 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 57 minutes ago Up 36 minutes (healthy) manager-inventory_reconciler-1
2026-03-17 01:18:52.937220 | orchestrator | 1e4b1d1b0355 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 57 minutes ago Up 36 minutes (healthy) kolla-ansible
2026-03-17 01:18:52.937226 | orchestrator | 1020d324a784 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 57 minutes ago Up 36 minutes (healthy) osism-kubernetes
2026-03-17 01:18:52.937642 | orchestrator | 9423185f3fa3 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 57 minutes ago Up 36 minutes (healthy) ceph-ansible
2026-03-17 01:18:52.937663 | orchestrator | 49d3f583a5e2 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 57 minutes ago Up 36 minutes (healthy) osism-ansible
2026-03-17 01:18:52.937717 | orchestrator | 5efc41e65710 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 57 minutes ago Up 36 minutes (healthy) 8000/tcp manager-ara-server-1
2026-03-17 01:18:52.937724 | orchestrator | 13c750d513d2 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-03-17 01:18:52.938011 | orchestrator | ac2e89326591 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-listener-1
2026-03-17 01:18:52.938031 | orchestrator | 0bb23d092e66 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 57 minutes ago Up 37 minutes (healthy) 6379/tcp manager-redis-1
2026-03-17 01:18:52.938042 | orchestrator | 60a6ff529576 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-openstack-1
2026-03-17 01:18:52.938869 | orchestrator | 78b767926bb1 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 57 minutes ago Up 37 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-03-17 01:18:52.938900 | orchestrator | 7a7216d664ef registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-beat-1
2026-03-17 01:18:52.938906 | orchestrator | 3d85c4b3b17d registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-flower-1
2026-03-17 01:18:52.938911 | orchestrator | e2daabfc4a33 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 57 minutes ago Up 37 minutes (healthy) osismclient
2026-03-17 01:18:52.938915 | orchestrator | 45d75788bcf6 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 57 minutes ago Up 37 minutes (healthy) 3306/tcp manager-mariadb-1
2026-03-17 01:18:52.938919 | orchestrator | d372148188f1 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-03-17 01:18:53.141270 | orchestrator |
2026-03-17 01:18:53.141331 | orchestrator | ## Images @ testbed-manager
2026-03-17 01:18:53.141340 | orchestrator |
2026-03-17 01:18:53.141348 | orchestrator | + echo
2026-03-17 01:18:53.141355 | orchestrator | + echo '## Images @ testbed-manager'
2026-03-17 01:18:53.141364 | orchestrator | + echo
2026-03-17 01:18:53.141371 | orchestrator | + osism container testbed-manager images
2026-03-17 01:18:55.225219 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-17 01:18:55.225270 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 9f242ea2af99 45 hours ago 239MB
2026-03-17 01:18:55.225276 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 6 weeks ago 41.4MB
2026-03-17 01:18:55.225290 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB
2026-03-17 01:18:55.225295 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 3 months ago 608MB
2026-03-17 01:18:55.225300 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-17 01:18:55.225304 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-17 01:18:55.225309 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-17 01:18:55.225313 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 3 months ago 308MB
2026-03-17 01:18:55.225318 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-17 01:18:55.225322 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 3 months ago 404MB
2026-03-17 01:18:55.225327 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 3 months ago 839MB
2026-03-17 01:18:55.225340 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-17 01:18:55.225345 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 3 months ago 330MB
2026-03-17 01:18:55.225350 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 3 months ago 613MB
2026-03-17 01:18:55.225354 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 3 months ago 560MB
2026-03-17 01:18:55.225359 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 3 months ago 1.23GB
2026-03-17 01:18:55.225363 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 3 months ago 383MB
2026-03-17 01:18:55.225368 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 3 months ago 238MB
2026-03-17 01:18:55.225372 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-03-17 01:18:55.225377 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-03-17 01:18:55.225381 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 6 months ago 275MB
2026-03-17 01:18:55.225386 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 7 months ago 226MB
2026-03-17 01:18:55.225390 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 10 months ago 453MB
2026-03-17 01:18:55.225395 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB
2026-03-17 01:18:55.441207 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-17 01:18:55.441333 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-17 01:18:55.470838 | orchestrator |
2026-03-17 01:18:55.470887 | orchestrator | ## Containers @ testbed-node-0
2026-03-17 01:18:55.470894 | orchestrator |
2026-03-17 01:18:55.470898 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-17 01:18:55.470904 | orchestrator | + echo
2026-03-17 01:18:55.470909 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-03-17 01:18:55.470914 | orchestrator | + echo
2026-03-17 01:18:55.470920 | orchestrator | + osism container testbed-node-0 ps
2026-03-17 01:18:57.514588 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-17 01:18:57.514654 | orchestrator | af8044c28801 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-03-17 01:18:57.514664 | orchestrator | 23600a879dd9 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-03-17 01:18:57.514672 | orchestrator | c219bc4af992 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-03-17 01:18:57.514679 | orchestrator | 5f9927e35197 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-03-17 01:18:57.514687 | orchestrator | 4058a81328c5 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-03-17 01:18:57.514705 | orchestrator | bf32a3c35ff4 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2026-03-17 01:18:57.514723 | orchestrator | 35f325df5038 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2026-03-17 01:18:57.514730 | orchestrator | d0c21bf670d1 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2026-03-17 01:18:57.514736 | orchestrator | 3d76669fb118 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-03-17 01:18:57.514743 | orchestrator | 42138fc41b5f registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2026-03-17 01:18:57.514750 | orchestrator | 4f43ff68fab9 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup
2026-03-17 01:18:57.514757 | orchestrator | 8c80be2c1809 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_volume
2026-03-17 01:18:57.514764 | orchestrator | 907844162b38 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler
2026-03-17 01:18:57.514771 | orchestrator | 8d77a2d0134b registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api
2026-03-17 01:18:57.514778 | orchestrator | a16c06d9327a registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2026-03-17 01:18:57.514785 | orchestrator | 17559365af3b registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter
2026-03-17 01:18:57.514793 | orchestrator | 30408e0792a2 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor
2026-03-17 01:18:57.514800 | orchestrator | 2ffdbbedef63 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter
2026-03-17 01:18:57.514806 | orchestrator | 10ed066452d0 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 12 minutes ago Up 11 minutes prometheus_mysqld_exporter
2026-03-17 01:18:57.514823 | orchestrator | ee67e34ee616 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter
2026-03-17 01:18:57.514830 | orchestrator | 37c6ed4b1b95 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor
2026-03-17 01:18:57.514837 | orchestrator | ee97e72623d2 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2026-03-17 01:18:57.514843 | orchestrator | 78d9224f7b57 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server
2026-03-17 01:18:57.514850 | orchestrator | ff4f31cf4071 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker
2026-03-17 01:18:57.514863 | orchestrator | dc89ed7dfc5d registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api
2026-03-17 01:18:57.514870 | orchestrator | 12fefdfb58f9 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns
2026-03-17 01:18:57.514877 | orchestrator | a44d000c6be0 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer
2026-03-17 01:18:57.514886 | orchestrator | 38837a738a45 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central
2026-03-17 01:18:57.514893 | orchestrator | acbaa3f62a89 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_api
2026-03-17 01:18:57.514900 | orchestrator | 53f537407a3a registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_worker
2026-03-17 01:18:57.514907 | orchestrator | c0ee6bd0ed4a registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_backend_bind9
2026-03-17 01:18:57.514913 | orchestrator | df25e3a1e333 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) barbican_keystone_listener
2026-03-17 01:18:57.514920 | orchestrator | 78555be6a98a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2026-03-17 01:18:57.514926 | orchestrator | 5c3d6c5bdac2 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api
2026-03-17 01:18:57.514933 | orchestrator | d3ce7b472740 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2026-03-17 01:18:57.514941 | orchestrator | d13b8c8903c2 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) keystone_fernet
2026-03-17 01:18:57.514948 | orchestrator | bbf0572fa5ee registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2026-03-17 01:18:57.514955 | orchestrator | 257b8e9d1b36 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2026-03-17 01:18:57.514961 | orchestrator | aa7ab05418db registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2026-03-17 01:18:57.514967 | orchestrator | 84022fd857e1 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2026-03-17 01:18:57.514977 | orchestrator | c88ffecdd8cf registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2026-03-17 01:18:57.514984 | orchestrator | 6aa44281b12d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0
2026-03-17 01:18:57.514995 | orchestrator | 777fcd21df4a registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2026-03-17 01:18:57.515002 | orchestrator | 16bd157f723e registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2026-03-17 01:18:57.515008 | orchestrator | 054627465e2b registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2026-03-17 01:18:57.515015 | orchestrator | 6510a0798c2d registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd
2026-03-17 01:18:57.515021 | orchestrator | 99310d9e2462 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db
2026-03-17 01:18:57.515028 | orchestrator | d7331ccf00cd registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db
2026-03-17 01:18:57.515035 | orchestrator | 9e9f78ec1fb9 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0
2026-03-17 01:18:57.515042 | orchestrator | b7f17b196ff9 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2026-03-17 01:18:57.515049 | orchestrator | b91573a8f644 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2026-03-17 01:18:57.515055 | orchestrator | a54380241df1 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2026-03-17 01:18:57.515062 | orchestrator | 2aa32a49c74d registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2026-03-17 01:18:57.515069 | orchestrator | 00bd43c3079f registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2026-03-17 01:18:57.515075 | orchestrator | 5fa75290237d registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2026-03-17 01:18:57.515082 | orchestrator | 85e7f0d0d6f7 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2026-03-17 01:18:57.515089 | orchestrator | 92a46d605898 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-03-17 01:18:57.515095 | orchestrator | 9465d1bb9289 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2026-03-17 01:18:57.515101 | orchestrator | 26e780b3b45c registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2026-03-17 01:18:57.718868 | orchestrator |
2026-03-17 01:18:57.718935 | orchestrator | ## Images @ testbed-node-0
2026-03-17 01:18:57.718944 | orchestrator |
2026-03-17 01:18:57.718951 | orchestrator | + echo
2026-03-17 01:18:57.718957 | orchestrator | + echo '## Images @ testbed-node-0'
2026-03-17 01:18:57.718964 | orchestrator | + echo
2026-03-17 01:18:57.718970 | orchestrator | + osism container testbed-node-0 images
2026-03-17 01:18:59.808335 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-17 01:18:59.808407 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB
2026-03-17 01:18:59.808416 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB
2026-03-17 01:18:59.808422 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB
2026-03-17 01:18:59.808433 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB
2026-03-17 01:18:59.808445 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB
2026-03-17 01:18:59.808451 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-17 01:18:59.808456 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-17 01:18:59.808461 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB
2026-03-17 01:18:59.808466 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB
2026-03-17 01:18:59.808472 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB
2026-03-17 01:18:59.808477 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-17 01:18:59.808482 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB
2026-03-17 01:18:59.808487 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB
2026-03-17 01:18:59.808492 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB
2026-03-17 01:18:59.808497 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB
2026-03-17 01:18:59.808502 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB
2026-03-17 01:18:59.808507 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB
2026-03-17 01:18:59.808512 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-17 01:18:59.808517 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB
2026-03-17 01:18:59.808522 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-17 01:18:59.808527 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB
2026-03-17 01:18:59.808532 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB
2026-03-17 01:18:59.808537 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB
2026-03-17 01:18:59.808542 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB
2026-03-17 01:18:59.808549 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB
2026-03-17 01:18:59.808557 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB
2026-03-17 01:18:59.808582 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB
2026-03-17 01:18:59.808592 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB
2026-03-17 01:18:59.808600 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB
2026-03-17 01:18:59.808613 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB
2026-03-17 01:18:59.808621 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB
2026-03-17 01:18:59.808642 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB
2026-03-17 01:18:59.808651 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB
2026-03-17 01:18:59.808660 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB
2026-03-17 01:18:59.808668 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB
2026-03-17 01:18:59.808676 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB
2026-03-17 01:18:59.808685 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB
2026-03-17 01:18:59.808693 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB
2026-03-17 01:18:59.808702 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB
2026-03-17 01:18:59.808711 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB
2026-03-17 01:18:59.808719 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB
2026-03-17 01:18:59.808727 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB
2026-03-17 01:18:59.808735 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB
2026-03-17 01:18:59.808744 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB
2026-03-17 01:18:59.808751 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB
2026-03-17 01:18:59.808759 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB
2026-03-17 01:18:59.808766 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB
2026-03-17 01:18:59.808774 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB
2026-03-17 01:18:59.808783 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB
2026-03-17 01:18:59.808792 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB
2026-03-17 01:18:59.808800 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB
2026-03-17 01:18:59.808808 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB
2026-03-17 01:18:59.808820 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB
2026-03-17 01:18:59.808826 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB
2026-03-17 01:18:59.808831 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB
2026-03-17 01:18:59.808836 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB
2026-03-17 01:18:59.808840 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB
2026-03-17 01:18:59.808849 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB
2026-03-17 01:18:59.808854 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB
2026-03-17 01:18:59.808859 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB
2026-03-17 01:18:59.808864 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB
2026-03-17 01:18:59.808869 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB
2026-03-17 01:18:59.808874 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB
2026-03-17 01:18:59.808884 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB
2026-03-17 01:18:59.808889 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB
2026-03-17 01:19:00.005461 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-17 01:19:00.015131 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-17 01:19:00.048086 | orchestrator |
2026-03-17 01:19:00.048129 | orchestrator | ## Containers @ testbed-node-1
2026-03-17 01:19:00.048135 | orchestrator |
2026-03-17 01:19:00.048139 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-17 01:19:00.048147 | orchestrator | + echo
2026-03-17 01:19:00.048152 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-03-17 01:19:00.048156 | orchestrator | + echo
2026-03-17 01:19:00.048160 | orchestrator | + osism container testbed-node-1 ps
2026-03-17 01:19:02.127227 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-17 01:19:02.127291 | orchestrator | 4f3e5a833c1a registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-03-17 01:19:02.127302 | orchestrator | 9bf81c5c1198 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-03-17 01:19:02.127308 | orchestrator | fa0b6617113d registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-03-17 01:19:02.127314 | orchestrator | 66b8e8ee5a89 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-03-17 01:19:02.127319 | orchestrator | 38c496d01ec6 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-03-17 01:19:02.127323 | orchestrator | 7dc6817889e1 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2026-03-17 01:19:02.127341 | orchestrator | c32b6175190a registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2026-03-17 01:19:02.127345 | orchestrator | b49334bf2fe4 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2026-03-17 01:19:02.127349 | orchestrator | 0a04da652a51 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-03-17 01:19:02.127361 | orchestrator | 2efff0f50298 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler
2026-03-17 01:19:02.127368 | orchestrator | b743199f2586 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup
2026-03-17 01:19:02.127378 | orchestrator | ac691df09383 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_volume
2026-03-17 01:19:02.127444 | orchestrator | d98e88fa95a3 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler
2026-03-17 01:19:02.127459 | orchestrator | 8999118a6c9b registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_api
2026-03-17 01:19:02.127463 | orchestrator | 60691809488f registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api
2026-03-17 01:19:02.127467 | orchestrator | 1af907a2dfb2 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter
2026-03-17 01:19:02.127473 | orchestrator | 5b2ac6569705 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor
2026-03-17 01:19:02.127477 | orchestrator | 00fb6114defb registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 12 minutes ago Up 11 minutes prometheus_memcached_exporter
2026-03-17 01:19:02.127481 | orchestrator | 1126dcd89f24 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter
2026-03-17 01:19:02.127495 | orchestrator | 91269de253c6 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter
2026-03-17 01:19:02.127502 | orchestrator | 97c9c68820c4 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor
2026-03-17 01:19:02.127508 | orchestrator | ad9aed6cd5bb registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2026-03-17 01:19:02.127514 | orchestrator | 8e6319a7af81 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server
2026-03-17 01:19:02.127520 | orchestrator | a54300bf0846 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker
2026-03-17 01:19:02.127535 | orchestrator | d607dd21ba05 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api
2026-03-17 01:19:02.127542 | orchestrator | f04002f3e598 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns
2026-03-17 01:19:02.127549 | orchestrator | ea6dd351db7a registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer
2026-03-17 01:19:02.127555 | orchestrator | a09f65eeb47a registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central
2026-03-17 01:19:02.127561 | orchestrator | 696a08038e6d registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_api
2026-03-17 01:19:02.127567 | orchestrator | 90fd2c153b24 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_worker 2026-03-17 01:19:02.128124 | orchestrator | 3c978d452490 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_backend_bind9 2026-03-17 01:19:02.128151 | orchestrator | e3be8ffbf6bc registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2026-03-17 01:19:02.128159 | orchestrator | 5e40a10b5e69 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2026-03-17 01:19:02.128165 | orchestrator | 7b7afa43f76b registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api 2026-03-17 01:19:02.128173 | orchestrator | 7a103d867956 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2026-03-17 01:19:02.128183 | orchestrator | 472c47efb805 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2026-03-17 01:19:02.128196 | orchestrator | dd6edc503f5a registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-03-17 01:19:02.128202 | orchestrator | b939d3f2917d registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2026-03-17 01:19:02.128208 | orchestrator | b6797cb249cb registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 
2026-03-17 01:19:02.128215 | orchestrator | f77e54ff7e97 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-03-17 01:19:02.128221 | orchestrator | 649f912a32fe registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2026-03-17 01:19:02.128227 | orchestrator | 685ff91a5b3f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1 2026-03-17 01:19:02.128239 | orchestrator | c3162a6b6383 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-03-17 01:19:02.128246 | orchestrator | 615c9d3e7250 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-03-17 01:19:02.128252 | orchestrator | 95098df24aa5 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-03-17 01:19:02.128259 | orchestrator | a4ae69d52a64 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2026-03-17 01:19:02.128265 | orchestrator | 07f492f32bea registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2026-03-17 01:19:02.128270 | orchestrator | 505d96671a5c registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2026-03-17 01:19:02.128276 | orchestrator | b6e47df75447 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2026-03-17 01:19:02.128282 | orchestrator | 188bbbe5073b registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 26 
minutes ago Up 26 minutes (healthy) rabbitmq 2026-03-17 01:19:02.128287 | orchestrator | b6c7adc72088 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1 2026-03-17 01:19:02.128300 | orchestrator | c92e224bad60 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2026-03-17 01:19:02.128307 | orchestrator | 0391367e4375 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-03-17 01:19:02.128313 | orchestrator | 852e24fce2e8 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2026-03-17 01:19:02.128319 | orchestrator | 5ac8f58f5f9f registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2026-03-17 01:19:02.128324 | orchestrator | a405e1a3a422 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2026-03-17 01:19:02.128330 | orchestrator | 18834f47a792 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-03-17 01:19:02.128336 | orchestrator | eb3022b77f4e registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-03-17 01:19:02.128342 | orchestrator | 0bc9272d2726 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-03-17 01:19:02.326869 | orchestrator | 2026-03-17 01:19:02.326929 | orchestrator | ## Images @ testbed-node-1 2026-03-17 01:19:02.326937 | orchestrator | 2026-03-17 01:19:02.326943 | orchestrator | + echo 2026-03-17 01:19:02.326949 | orchestrator | + echo '## 
Images @ testbed-node-1' 2026-03-17 01:19:02.326956 | orchestrator | + echo 2026-03-17 01:19:02.326963 | orchestrator | + osism container testbed-node-1 images 2026-03-17 01:19:04.415502 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-17 01:19:04.415568 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-17 01:19:04.415576 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-17 01:19:04.415584 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-17 01:19:04.415590 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-17 01:19:04.415607 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-17 01:19:04.415614 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-17 01:19:04.415621 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-17 01:19:04.415627 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-17 01:19:04.415634 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-17 01:19:04.415640 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-17 01:19:04.415647 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-17 01:19:04.415654 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-17 01:19:04.415660 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 
months ago 273MB 2026-03-17 01:19:04.415669 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-17 01:19:04.415676 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-17 01:19:04.415682 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-17 01:19:04.415689 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-17 01:19:04.415696 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-17 01:19:04.415702 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-17 01:19:04.415709 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-17 01:19:04.415716 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-17 01:19:04.415722 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-17 01:19:04.415729 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-17 01:19:04.415735 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-17 01:19:04.415741 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-17 01:19:04.415760 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-17 01:19:04.415767 | orchestrator | 
registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-17 01:19:04.415774 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-17 01:19:04.415780 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-17 01:19:04.415787 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-17 01:19:04.415803 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-17 01:19:04.415809 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-17 01:19:04.415815 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-17 01:19:04.415822 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-17 01:19:04.415829 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-17 01:19:04.415835 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-17 01:19:04.415842 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-17 01:19:04.415848 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-17 01:19:04.415855 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-17 01:19:04.415862 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-17 01:19:04.415869 | 
orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-17 01:19:04.415875 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-17 01:19:04.415882 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-17 01:19:04.415888 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-17 01:19:04.415894 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-17 01:19:04.415901 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-17 01:19:04.415907 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-17 01:19:04.415913 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-17 01:19:04.415920 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-17 01:19:04.415930 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-17 01:19:04.415937 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-17 01:19:04.415943 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-17 01:19:04.415955 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-17 01:19:04.415962 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-17 01:19:04.415968 | 
orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-17 01:19:04.415974 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-17 01:19:04.415981 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-17 01:19:04.699074 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-17 01:19:04.699166 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-17 01:19:04.752138 | orchestrator | 2026-03-17 01:19:04.752228 | orchestrator | ## Containers @ testbed-node-2 2026-03-17 01:19:04.752238 | orchestrator | 2026-03-17 01:19:04.752243 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-17 01:19:04.752250 | orchestrator | + echo 2026-03-17 01:19:04.752258 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-03-17 01:19:04.752266 | orchestrator | + echo 2026-03-17 01:19:04.752272 | orchestrator | + osism container testbed-node-2 ps 2026-03-17 01:19:07.034417 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-17 01:19:07.034477 | orchestrator | 4f8810b0c71a registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-17 01:19:07.034486 | orchestrator | ab9d455f7170 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-17 01:19:07.034493 | orchestrator | b9ad80efbaea registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-17 01:19:07.034500 | orchestrator | b50945200295 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-03-17 01:19:07.034507 | 
orchestrator | 3f9c5ab7d480 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-03-17 01:19:07.034513 | orchestrator | 627ab73dc51a registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2026-03-17 01:19:07.034520 | orchestrator | 309241a46be5 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-03-17 01:19:07.034527 | orchestrator | 78cf2399399f registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2026-03-17 01:19:07.034533 | orchestrator | be8991fe55a5 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 9 minutes ago Up 8 minutes grafana 2026-03-17 01:19:07.034540 | orchestrator | 6f858167c5fb registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_scheduler 2026-03-17 01:19:07.034547 | orchestrator | 3e238e019586 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup 2026-03-17 01:19:07.034568 | orchestrator | 2a6220ffeb60 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_volume 2026-03-17 01:19:07.034575 | orchestrator | 2f428d899dcd registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler 2026-03-17 01:19:07.034581 | orchestrator | 6bf51ee4b96c registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api 2026-03-17 01:19:07.034587 | orchestrator | 88cbe5025036 
registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) cinder_api 2026-03-17 01:19:07.034593 | orchestrator | 20a80b5132c9 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2026-03-17 01:19:07.034601 | orchestrator | db7046e8590e registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2026-03-17 01:19:07.034608 | orchestrator | e0ace1b778f0 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2026-03-17 01:19:07.034615 | orchestrator | 98650f889036 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2026-03-17 01:19:07.034632 | orchestrator | 4e221d0b9b71 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2026-03-17 01:19:07.034639 | orchestrator | 51fdd3725f0d registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor 2026-03-17 01:19:07.034645 | orchestrator | 8f84d00697a9 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2026-03-17 01:19:07.034652 | orchestrator | eacb603a46f7 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server 2026-03-17 01:19:07.034659 | orchestrator | 4eb384fd6ebb registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 
2026-03-17 01:19:07.034665 | orchestrator | a325dd2897ea registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2026-03-17 01:19:07.034681 | orchestrator | 1570deb775eb registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns 2026-03-17 01:19:07.034688 | orchestrator | 4c5b5c01c619 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer 2026-03-17 01:19:07.034694 | orchestrator | 5c40a648e90f registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central 2026-03-17 01:19:07.034701 | orchestrator | 3e4e499faf00 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_api 2026-03-17 01:19:07.034712 | orchestrator | 51fcedf0fc5e registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) barbican_worker 2026-03-17 01:19:07.034718 | orchestrator | 831901a13c0b registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) designate_backend_bind9 2026-03-17 01:19:07.034727 | orchestrator | a92f30bb236f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2026-03-17 01:19:07.034734 | orchestrator | 34fe90555518 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2026-03-17 01:19:07.034740 | orchestrator | 3a76138ae649 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes 
(healthy) barbican_api 2026-03-17 01:19:07.034746 | orchestrator | 2c2928e8647b registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-03-17 01:19:07.034752 | orchestrator | 09c9aaf4ed56 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2026-03-17 01:19:07.034759 | orchestrator | 8ae976031a6a registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-03-17 01:19:07.034765 | orchestrator | fb32616117eb registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2026-03-17 01:19:07.034771 | orchestrator | 5777a3e0363c registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2026-03-17 01:19:07.034777 | orchestrator | a66c7f867b69 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-03-17 01:19:07.034788 | orchestrator | f1df6dbb0ee0 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2026-03-17 01:19:07.034794 | orchestrator | 3f593578f510 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2026-03-17 01:19:07.034799 | orchestrator | 1e308ba846b1 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-03-17 01:19:07.034805 | orchestrator | dbf0e3dd1f00 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-03-17 01:19:07.034811 | orchestrator | a3262276bb9a 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2026-03-17 01:19:07.034817 | orchestrator | fbd1af19dfcf registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2026-03-17 01:19:07.034824 | orchestrator | 798f1d989f77 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2026-03-17 01:19:07.034830 | orchestrator | 7b35a9769a22 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2026-03-17 01:19:07.034841 | orchestrator | 80a3cf956aca registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2026-03-17 01:19:07.034848 | orchestrator | e275177f24db registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2026-03-17 01:19:07.034854 | orchestrator | 2132e31f4908 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2 2026-03-17 01:19:07.034860 | orchestrator | 83879d8926c0 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2026-03-17 01:19:07.034866 | orchestrator | 1aa50ac9af4e registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2026-03-17 01:19:07.034872 | orchestrator | 90d0e3059d8b registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2026-03-17 01:19:07.034879 | orchestrator | 8d644467164a registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 28 minutes ago 
Up 28 minutes (healthy) redis 2026-03-17 01:19:07.034886 | orchestrator | 801dc96d150a registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2026-03-17 01:19:07.034892 | orchestrator | 943639d3f7cd registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-03-17 01:19:07.034899 | orchestrator | 7ba6e9b361ce registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-03-17 01:19:07.034906 | orchestrator | a452ce1b3a17 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-03-17 01:19:07.288967 | orchestrator | 2026-03-17 01:19:07.289017 | orchestrator | ## Images @ testbed-node-2 2026-03-17 01:19:07.289023 | orchestrator | 2026-03-17 01:19:07.289027 | orchestrator | + echo 2026-03-17 01:19:07.289032 | orchestrator | + echo '## Images @ testbed-node-2' 2026-03-17 01:19:07.289036 | orchestrator | + echo 2026-03-17 01:19:07.289040 | orchestrator | + osism container testbed-node-2 images 2026-03-17 01:19:09.562718 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-17 01:19:09.562775 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-17 01:19:09.562782 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-17 01:19:09.562788 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-17 01:19:09.562794 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-17 01:19:09.562810 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-17 01:19:09.562816 | orchestrator | 
registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-17 01:19:09.562821 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-17 01:19:09.562835 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-17 01:19:09.562841 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-17 01:19:09.562846 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-17 01:19:09.562851 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-17 01:19:09.562857 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-17 01:19:09.562862 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-17 01:19:09.562866 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-17 01:19:09.562871 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-17 01:19:09.562876 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-17 01:19:09.562881 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-17 01:19:09.562884 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-17 01:19:09.562887 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-17 01:19:09.562890 | orchestrator | 
registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-17 01:19:09.562893 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-17 01:19:09.562899 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-17 01:19:09.562902 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-17 01:19:09.562906 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-17 01:19:09.562909 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-17 01:19:09.562912 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-17 01:19:09.562915 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-17 01:19:09.562918 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-17 01:19:09.562921 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-17 01:19:09.562926 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-17 01:19:09.562931 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-17 01:19:09.562949 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-17 01:19:09.562955 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-17 01:19:09.562964 | orchestrator | 
registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-17 01:19:09.562969 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-17 01:19:09.562974 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-17 01:19:09.562979 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-17 01:19:09.562984 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-17 01:19:09.562988 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-17 01:19:09.562992 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-17 01:19:09.562997 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-17 01:19:09.563001 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-17 01:19:09.563006 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-17 01:19:09.563011 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-17 01:19:09.563016 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-17 01:19:09.563020 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-17 01:19:09.563025 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-17 01:19:09.563030 | 
orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-17 01:19:09.563035 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-17 01:19:09.563040 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-17 01:19:09.563045 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-17 01:19:09.563050 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-17 01:19:09.563055 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-17 01:19:09.563060 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-17 01:19:09.563067 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-17 01:19:09.563073 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-17 01:19:09.563078 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-17 01:19:09.857692 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-17 01:19:09.863173 | orchestrator | + set -e 2026-03-17 01:19:09.863213 | orchestrator | + source /opt/manager-vars.sh 2026-03-17 01:19:09.863935 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-17 01:19:09.863968 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-17 01:19:09.863991 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-17 01:19:09.864006 | orchestrator | ++ CEPH_VERSION=reef 2026-03-17 01:19:09.864015 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-17 01:19:09.864023 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2026-03-17 01:19:09.864030 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-17 01:19:09.864038 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-17 01:19:09.864043 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-17 01:19:09.864047 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-17 01:19:09.864051 | orchestrator | ++ export ARA=false 2026-03-17 01:19:09.864056 | orchestrator | ++ ARA=false 2026-03-17 01:19:09.864061 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-17 01:19:09.864067 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-17 01:19:09.864073 | orchestrator | ++ export TEMPEST=true 2026-03-17 01:19:09.864079 | orchestrator | ++ TEMPEST=true 2026-03-17 01:19:09.864086 | orchestrator | ++ export IS_ZUUL=true 2026-03-17 01:19:09.864093 | orchestrator | ++ IS_ZUUL=true 2026-03-17 01:19:09.864100 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-17 01:19:09.864107 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-17 01:19:09.864112 | orchestrator | ++ export EXTERNAL_API=false 2026-03-17 01:19:09.864116 | orchestrator | ++ EXTERNAL_API=false 2026-03-17 01:19:09.864121 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-17 01:19:09.864125 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-17 01:19:09.864130 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-17 01:19:09.864134 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-17 01:19:09.864139 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-17 01:19:09.864143 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-17 01:19:09.864148 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-17 01:19:09.864152 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-03-17 01:19:09.873143 | orchestrator | + set -e 2026-03-17 01:19:09.873191 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-17 01:19:09.873197 | orchestrator | ++ export 
INTERACTIVE=false 2026-03-17 01:19:09.873202 | orchestrator | ++ INTERACTIVE=false 2026-03-17 01:19:09.873206 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-17 01:19:09.873210 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-17 01:19:09.873214 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-17 01:19:09.873982 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-17 01:19:09.876859 | orchestrator | 2026-03-17 01:19:09.876912 | orchestrator | # Ceph status 2026-03-17 01:19:09.876919 | orchestrator | 2026-03-17 01:19:09.876925 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-17 01:19:09.876931 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-17 01:19:09.876937 | orchestrator | + echo 2026-03-17 01:19:09.876942 | orchestrator | + echo '# Ceph status' 2026-03-17 01:19:09.876953 | orchestrator | + echo 2026-03-17 01:19:09.876958 | orchestrator | + ceph -s 2026-03-17 01:19:10.439864 | orchestrator | cluster: 2026-03-17 01:19:10.439937 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-03-17 01:19:10.439947 | orchestrator | health: HEALTH_OK 2026-03-17 01:19:10.439954 | orchestrator | 2026-03-17 01:19:10.439958 | orchestrator | services: 2026-03-17 01:19:10.439962 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 26m) 2026-03-17 01:19:10.439969 | orchestrator | mgr: testbed-node-2(active, since 14m), standbys: testbed-node-1, testbed-node-0 2026-03-17 01:19:10.439974 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-03-17 01:19:10.439980 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 23m) 2026-03-17 01:19:10.439986 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-03-17 01:19:10.439994 | orchestrator | 2026-03-17 01:19:10.439999 | orchestrator | data: 2026-03-17 01:19:10.440004 | orchestrator | volumes: 1/1 healthy 2026-03-17 01:19:10.440009 | orchestrator | pools: 14 
pools, 401 pgs 2026-03-17 01:19:10.440014 | orchestrator | objects: 556 objects, 2.2 GiB 2026-03-17 01:19:10.440019 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-03-17 01:19:10.440024 | orchestrator | pgs: 401 active+clean 2026-03-17 01:19:10.440029 | orchestrator | 2026-03-17 01:19:10.480690 | orchestrator | 2026-03-17 01:19:10.480760 | orchestrator | # Ceph versions 2026-03-17 01:19:10.480769 | orchestrator | 2026-03-17 01:19:10.480777 | orchestrator | + echo 2026-03-17 01:19:10.480786 | orchestrator | + echo '# Ceph versions' 2026-03-17 01:19:10.480794 | orchestrator | + echo 2026-03-17 01:19:10.480802 | orchestrator | + ceph versions 2026-03-17 01:19:11.022436 | orchestrator | { 2026-03-17 01:19:11.022495 | orchestrator | "mon": { 2026-03-17 01:19:11.022501 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-17 01:19:11.022514 | orchestrator | }, 2026-03-17 01:19:11.022517 | orchestrator | "mgr": { 2026-03-17 01:19:11.022521 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-17 01:19:11.022524 | orchestrator | }, 2026-03-17 01:19:11.022527 | orchestrator | "osd": { 2026-03-17 01:19:11.022530 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-03-17 01:19:11.022533 | orchestrator | }, 2026-03-17 01:19:11.022536 | orchestrator | "mds": { 2026-03-17 01:19:11.022540 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-17 01:19:11.022543 | orchestrator | }, 2026-03-17 01:19:11.022546 | orchestrator | "rgw": { 2026-03-17 01:19:11.022549 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-17 01:19:11.022552 | orchestrator | }, 2026-03-17 01:19:11.022555 | orchestrator | "overall": { 2026-03-17 01:19:11.022559 | orchestrator | "ceph version 18.2.7 
(6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-03-17 01:19:11.022562 | orchestrator | } 2026-03-17 01:19:11.022565 | orchestrator | } 2026-03-17 01:19:11.064668 | orchestrator | 2026-03-17 01:19:11.064717 | orchestrator | # Ceph OSD tree 2026-03-17 01:19:11.064724 | orchestrator | 2026-03-17 01:19:11.064729 | orchestrator | + echo 2026-03-17 01:19:11.064735 | orchestrator | + echo '# Ceph OSD tree' 2026-03-17 01:19:11.064741 | orchestrator | + echo 2026-03-17 01:19:11.064746 | orchestrator | + ceph osd df tree 2026-03-17 01:19:11.562267 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-03-17 01:19:11.562335 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 400 MiB 113 GiB 5.89 1.00 - root default 2026-03-17 01:19:11.562343 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 121 MiB 38 GiB 5.86 0.99 - host testbed-node-3 2026-03-17 01:19:11.562348 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 1 KiB 52 MiB 18 GiB 7.57 1.29 200 up osd.0 2026-03-17 01:19:11.562352 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 847 MiB 777 MiB 1 KiB 70 MiB 19 GiB 4.14 0.70 190 up osd.4 2026-03-17 01:19:11.562355 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.90 1.00 - host testbed-node-4 2026-03-17 01:19:11.562369 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 887 MiB 818 MiB 1 KiB 70 MiB 19 GiB 4.34 0.74 176 up osd.1 2026-03-17 01:19:11.562413 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 18 GiB 7.46 1.27 216 up osd.3 2026-03-17 01:19:11.562418 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.90 1.00 - host testbed-node-5 2026-03-17 01:19:11.562422 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.68 1.14 191 up osd.2 2026-03-17 01:19:11.562426 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 978 MiB 1 KiB 70 MiB 19 GiB 5.12 0.87 197 up osd.5 2026-03-17 
01:19:11.562430 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 400 MiB 113 GiB 5.89 2026-03-17 01:19:11.562434 | orchestrator | MIN/MAX VAR: 0.70/1.29 STDDEV: 1.41 2026-03-17 01:19:11.616546 | orchestrator | 2026-03-17 01:19:11.616610 | orchestrator | # Ceph monitor status 2026-03-17 01:19:11.616622 | orchestrator | 2026-03-17 01:19:11.616626 | orchestrator | + echo 2026-03-17 01:19:11.616631 | orchestrator | + echo '# Ceph monitor status' 2026-03-17 01:19:11.616635 | orchestrator | + echo 2026-03-17 01:19:11.616639 | orchestrator | + ceph mon stat 2026-03-17 01:19:12.157843 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-17 01:19:12.201843 | orchestrator | 2026-03-17 01:19:12.201913 | orchestrator | # Ceph quorum status 2026-03-17 01:19:12.201922 | orchestrator | 2026-03-17 01:19:12.201928 | orchestrator | + echo 2026-03-17 01:19:12.201934 | orchestrator | + echo '# Ceph quorum status' 2026-03-17 01:19:12.201939 | orchestrator | + echo 2026-03-17 01:19:12.201944 | orchestrator | + ceph quorum_status 2026-03-17 01:19:12.201949 | orchestrator | + jq 2026-03-17 01:19:12.828493 | orchestrator | { 2026-03-17 01:19:12.828550 | orchestrator | "election_epoch": 4, 2026-03-17 01:19:12.828558 | orchestrator | "quorum": [ 2026-03-17 01:19:12.828564 | orchestrator | 0, 2026-03-17 01:19:12.828570 | orchestrator | 1, 2026-03-17 01:19:12.828575 | orchestrator | 2 2026-03-17 01:19:12.828581 | orchestrator | ], 2026-03-17 01:19:12.828587 | orchestrator | "quorum_names": [ 2026-03-17 01:19:12.828592 | orchestrator | "testbed-node-0", 2026-03-17 01:19:12.828598 | orchestrator | "testbed-node-1", 2026-03-17 01:19:12.828603 | orchestrator | 
"testbed-node-2" 2026-03-17 01:19:12.828609 | orchestrator | ], 2026-03-17 01:19:12.828615 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-17 01:19:12.828621 | orchestrator | "quorum_age": 1602, 2026-03-17 01:19:12.828627 | orchestrator | "features": { 2026-03-17 01:19:12.828630 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-17 01:19:12.828633 | orchestrator | "quorum_mon": [ 2026-03-17 01:19:12.828636 | orchestrator | "kraken", 2026-03-17 01:19:12.828639 | orchestrator | "luminous", 2026-03-17 01:19:12.828643 | orchestrator | "mimic", 2026-03-17 01:19:12.828646 | orchestrator | "osdmap-prune", 2026-03-17 01:19:12.828651 | orchestrator | "nautilus", 2026-03-17 01:19:12.828656 | orchestrator | "octopus", 2026-03-17 01:19:12.828661 | orchestrator | "pacific", 2026-03-17 01:19:12.828666 | orchestrator | "elector-pinging", 2026-03-17 01:19:12.828672 | orchestrator | "quincy", 2026-03-17 01:19:12.828677 | orchestrator | "reef" 2026-03-17 01:19:12.828682 | orchestrator | ] 2026-03-17 01:19:12.828687 | orchestrator | }, 2026-03-17 01:19:12.828692 | orchestrator | "monmap": { 2026-03-17 01:19:12.828695 | orchestrator | "epoch": 1, 2026-03-17 01:19:12.828698 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-17 01:19:12.828702 | orchestrator | "modified": "2026-03-17T00:52:17.794966Z", 2026-03-17 01:19:12.828705 | orchestrator | "created": "2026-03-17T00:52:17.794966Z", 2026-03-17 01:19:12.828708 | orchestrator | "min_mon_release": 18, 2026-03-17 01:19:12.828712 | orchestrator | "min_mon_release_name": "reef", 2026-03-17 01:19:12.828716 | orchestrator | "election_strategy": 1, 2026-03-17 01:19:12.828721 | orchestrator | "disallowed_leaders: ": "", 2026-03-17 01:19:12.828726 | orchestrator | "stretch_mode": false, 2026-03-17 01:19:12.828732 | orchestrator | "tiebreaker_mon": "", 2026-03-17 01:19:12.828736 | orchestrator | "removed_ranks: ": "", 2026-03-17 01:19:12.828741 | orchestrator | "features": { 2026-03-17 
01:19:12.828746 | orchestrator | "persistent": [ 2026-03-17 01:19:12.828751 | orchestrator | "kraken", 2026-03-17 01:19:12.828757 | orchestrator | "luminous", 2026-03-17 01:19:12.828764 | orchestrator | "mimic", 2026-03-17 01:19:12.828770 | orchestrator | "osdmap-prune", 2026-03-17 01:19:12.828775 | orchestrator | "nautilus", 2026-03-17 01:19:12.828779 | orchestrator | "octopus", 2026-03-17 01:19:12.828784 | orchestrator | "pacific", 2026-03-17 01:19:12.828788 | orchestrator | "elector-pinging", 2026-03-17 01:19:12.828792 | orchestrator | "quincy", 2026-03-17 01:19:12.828797 | orchestrator | "reef" 2026-03-17 01:19:12.828801 | orchestrator | ], 2026-03-17 01:19:12.828806 | orchestrator | "optional": [] 2026-03-17 01:19:12.828811 | orchestrator | }, 2026-03-17 01:19:12.828815 | orchestrator | "mons": [ 2026-03-17 01:19:12.828820 | orchestrator | { 2026-03-17 01:19:12.828825 | orchestrator | "rank": 0, 2026-03-17 01:19:12.828830 | orchestrator | "name": "testbed-node-0", 2026-03-17 01:19:12.828835 | orchestrator | "public_addrs": { 2026-03-17 01:19:12.829504 | orchestrator | "addrvec": [ 2026-03-17 01:19:12.829530 | orchestrator | { 2026-03-17 01:19:12.829536 | orchestrator | "type": "v2", 2026-03-17 01:19:12.829542 | orchestrator | "addr": "192.168.16.10:3300", 2026-03-17 01:19:12.829549 | orchestrator | "nonce": 0 2026-03-17 01:19:12.829554 | orchestrator | }, 2026-03-17 01:19:12.829560 | orchestrator | { 2026-03-17 01:19:12.829566 | orchestrator | "type": "v1", 2026-03-17 01:19:12.829571 | orchestrator | "addr": "192.168.16.10:6789", 2026-03-17 01:19:12.829578 | orchestrator | "nonce": 0 2026-03-17 01:19:12.829584 | orchestrator | } 2026-03-17 01:19:12.829590 | orchestrator | ] 2026-03-17 01:19:12.829596 | orchestrator | }, 2026-03-17 01:19:12.829602 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-03-17 01:19:12.829608 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-03-17 01:19:12.829631 | orchestrator | "priority": 0, 2026-03-17 01:19:12.829638 
| orchestrator | "weight": 0, 2026-03-17 01:19:12.829645 | orchestrator | "crush_location": "{}" 2026-03-17 01:19:12.829651 | orchestrator | }, 2026-03-17 01:19:12.829657 | orchestrator | { 2026-03-17 01:19:12.829663 | orchestrator | "rank": 1, 2026-03-17 01:19:12.829669 | orchestrator | "name": "testbed-node-1", 2026-03-17 01:19:12.829675 | orchestrator | "public_addrs": { 2026-03-17 01:19:12.829681 | orchestrator | "addrvec": [ 2026-03-17 01:19:12.829687 | orchestrator | { 2026-03-17 01:19:12.829692 | orchestrator | "type": "v2", 2026-03-17 01:19:12.829698 | orchestrator | "addr": "192.168.16.11:3300", 2026-03-17 01:19:12.829705 | orchestrator | "nonce": 0 2026-03-17 01:19:12.829711 | orchestrator | }, 2026-03-17 01:19:12.829717 | orchestrator | { 2026-03-17 01:19:12.829724 | orchestrator | "type": "v1", 2026-03-17 01:19:12.829730 | orchestrator | "addr": "192.168.16.11:6789", 2026-03-17 01:19:12.829736 | orchestrator | "nonce": 0 2026-03-17 01:19:12.829742 | orchestrator | } 2026-03-17 01:19:12.829748 | orchestrator | ] 2026-03-17 01:19:12.829754 | orchestrator | }, 2026-03-17 01:19:12.829760 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-03-17 01:19:12.829767 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-03-17 01:19:12.829773 | orchestrator | "priority": 0, 2026-03-17 01:19:12.829779 | orchestrator | "weight": 0, 2026-03-17 01:19:12.829786 | orchestrator | "crush_location": "{}" 2026-03-17 01:19:12.829792 | orchestrator | }, 2026-03-17 01:19:12.829797 | orchestrator | { 2026-03-17 01:19:12.829804 | orchestrator | "rank": 2, 2026-03-17 01:19:12.829809 | orchestrator | "name": "testbed-node-2", 2026-03-17 01:19:12.829815 | orchestrator | "public_addrs": { 2026-03-17 01:19:12.829821 | orchestrator | "addrvec": [ 2026-03-17 01:19:12.829827 | orchestrator | { 2026-03-17 01:19:12.829833 | orchestrator | "type": "v2", 2026-03-17 01:19:12.829839 | orchestrator | "addr": "192.168.16.12:3300", 2026-03-17 01:19:12.829845 | orchestrator | "nonce": 0 
2026-03-17 01:19:12.829851 | orchestrator | }, 2026-03-17 01:19:12.829857 | orchestrator | { 2026-03-17 01:19:12.829862 | orchestrator | "type": "v1", 2026-03-17 01:19:12.829868 | orchestrator | "addr": "192.168.16.12:6789", 2026-03-17 01:19:12.829875 | orchestrator | "nonce": 0 2026-03-17 01:19:12.829880 | orchestrator | } 2026-03-17 01:19:12.829886 | orchestrator | ] 2026-03-17 01:19:12.829892 | orchestrator | }, 2026-03-17 01:19:12.829898 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-03-17 01:19:12.829904 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-03-17 01:19:12.829911 | orchestrator | "priority": 0, 2026-03-17 01:19:12.829917 | orchestrator | "weight": 0, 2026-03-17 01:19:12.829922 | orchestrator | "crush_location": "{}" 2026-03-17 01:19:12.829928 | orchestrator | } 2026-03-17 01:19:12.829934 | orchestrator | ] 2026-03-17 01:19:12.829940 | orchestrator | } 2026-03-17 01:19:12.829947 | orchestrator | } 2026-03-17 01:19:12.829963 | orchestrator | 2026-03-17 01:19:12.829970 | orchestrator | # Ceph free space status 2026-03-17 01:19:12.829976 | orchestrator | 2026-03-17 01:19:12.829982 | orchestrator | + echo 2026-03-17 01:19:12.829988 | orchestrator | + echo '# Ceph free space status' 2026-03-17 01:19:12.829994 | orchestrator | + echo 2026-03-17 01:19:12.830000 | orchestrator | + ceph df 2026-03-17 01:19:13.393537 | orchestrator | --- RAW STORAGE --- 2026-03-17 01:19:13.393585 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-03-17 01:19:13.393595 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-03-17 01:19:13.393598 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-03-17 01:19:13.393607 | orchestrator | 2026-03-17 01:19:13.393613 | orchestrator | --- POOLS --- 2026-03-17 01:19:13.393619 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-03-17 01:19:13.393628 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-03-17 01:19:13.393633 | orchestrator | cephfs_data 2 32 0 B 0 0 
B 0 35 GiB 2026-03-17 01:19:13.393639 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-03-17 01:19:13.393644 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-03-17 01:19:13.393649 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-03-17 01:19:13.393669 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-03-17 01:19:13.393675 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-03-17 01:19:13.393681 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-03-17 01:19:13.393686 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2026-03-17 01:19:13.393692 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-03-17 01:19:13.393697 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-03-17 01:19:13.393702 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.97 35 GiB 2026-03-17 01:19:13.393707 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-03-17 01:19:13.393713 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-03-17 01:19:13.438827 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-17 01:19:13.486696 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-17 01:19:13.486743 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-03-17 01:19:13.486748 | orchestrator | + osism apply facts 2026-03-17 01:19:25.484233 | orchestrator | 2026-03-17 01:19:25 | INFO  | Task 52ba5b28-c5d7-4ccf-abb5-83e312e84347 (facts) was prepared for execution. 2026-03-17 01:19:25.484303 | orchestrator | 2026-03-17 01:19:25 | INFO  | It takes a moment until task 52ba5b28-c5d7-4ccf-abb5-83e312e84347 (facts) has been started and output is visible here. 
2026-03-17 01:19:38.972924 | orchestrator | 2026-03-17 01:19:38.973008 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-17 01:19:38.973021 | orchestrator | 2026-03-17 01:19:38.973031 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-17 01:19:38.973040 | orchestrator | Tuesday 17 March 2026 01:19:30 +0000 (0:00:00.306) 0:00:00.306 ********* 2026-03-17 01:19:38.973048 | orchestrator | ok: [testbed-manager] 2026-03-17 01:19:38.973059 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:19:38.973067 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:19:38.973076 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:19:38.973084 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:19:38.973091 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:19:38.973099 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:19:38.973107 | orchestrator | 2026-03-17 01:19:38.973114 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-17 01:19:38.973123 | orchestrator | Tuesday 17 March 2026 01:19:31 +0000 (0:00:01.260) 0:00:01.567 ********* 2026-03-17 01:19:38.973133 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:19:38.973142 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:38.973151 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:38.973160 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:38.973169 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:38.973178 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:38.973207 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:38.973213 | orchestrator | 2026-03-17 01:19:38.973219 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-17 01:19:38.973225 | orchestrator | 2026-03-17 01:19:38.973230 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-17 01:19:38.973235 | orchestrator | Tuesday 17 March 2026 01:19:32 +0000 (0:00:01.453) 0:00:03.020 ********* 2026-03-17 01:19:38.973241 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:19:38.973246 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:19:38.973251 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:19:38.973256 | orchestrator | ok: [testbed-manager] 2026-03-17 01:19:38.973261 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:19:38.973267 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:19:38.973271 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:19:38.973276 | orchestrator | 2026-03-17 01:19:38.973282 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-17 01:19:38.973287 | orchestrator | 2026-03-17 01:19:38.973292 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-17 01:19:38.973312 | orchestrator | Tuesday 17 March 2026 01:19:38 +0000 (0:00:05.352) 0:00:08.373 ********* 2026-03-17 01:19:38.973318 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:19:38.973323 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:38.973328 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:38.973333 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:38.973338 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:38.973343 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:38.973348 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:38.973353 | orchestrator | 2026-03-17 01:19:38.973358 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:19:38.973364 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 01:19:38.973374 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-17 01:19:38.973379 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 01:19:38.973424 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 01:19:38.973439 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 01:19:38.973444 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 01:19:38.973449 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 01:19:38.973454 | orchestrator | 2026-03-17 01:19:38.973459 | orchestrator | 2026-03-17 01:19:38.973465 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:19:38.973470 | orchestrator | Tuesday 17 March 2026 01:19:38 +0000 (0:00:00.494) 0:00:08.868 ********* 2026-03-17 01:19:38.973475 | orchestrator | =============================================================================== 2026-03-17 01:19:38.973480 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.35s 2026-03-17 01:19:38.973485 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.45s 2026-03-17 01:19:38.973491 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.26s 2026-03-17 01:19:38.973497 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2026-03-17 01:19:39.171018 | orchestrator | + osism validate ceph-mons 2026-03-17 01:20:10.687580 | orchestrator | 2026-03-17 01:20:10.687696 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-03-17 01:20:10.687708 | orchestrator | 2026-03-17 01:20:10.687715 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-03-17 01:20:10.687723 | orchestrator | Tuesday 17 March 2026 01:19:55 +0000 (0:00:00.455) 0:00:00.455 ********* 2026-03-17 01:20:10.687731 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:10.687738 | orchestrator | 2026-03-17 01:20:10.687745 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-17 01:20:10.687799 | orchestrator | Tuesday 17 March 2026 01:19:56 +0000 (0:00:00.787) 0:00:01.243 ********* 2026-03-17 01:20:10.687808 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:10.687816 | orchestrator | 2026-03-17 01:20:10.687823 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-17 01:20:10.687830 | orchestrator | Tuesday 17 March 2026 01:19:57 +0000 (0:00:00.927) 0:00:02.171 ********* 2026-03-17 01:20:10.687838 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.687897 | orchestrator | 2026-03-17 01:20:10.687905 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-17 01:20:10.687912 | orchestrator | Tuesday 17 March 2026 01:19:57 +0000 (0:00:00.118) 0:00:02.289 ********* 2026-03-17 01:20:10.687918 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.687924 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:20:10.687930 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:20:10.687936 | orchestrator | 2026-03-17 01:20:10.687942 | orchestrator | TASK [Get container info] ****************************************************** 2026-03-17 01:20:10.687949 | orchestrator | Tuesday 17 March 2026 01:19:57 +0000 (0:00:00.271) 0:00:02.561 ********* 2026-03-17 01:20:10.687955 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:20:10.687959 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:20:10.687963 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.687967 | 
orchestrator | 2026-03-17 01:20:10.687971 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-17 01:20:10.687975 | orchestrator | Tuesday 17 March 2026 01:19:58 +0000 (0:00:01.081) 0:00:03.643 ********* 2026-03-17 01:20:10.687978 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:10.687983 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:20:10.687987 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:20:10.687990 | orchestrator | 2026-03-17 01:20:10.687994 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-17 01:20:10.687998 | orchestrator | Tuesday 17 March 2026 01:19:58 +0000 (0:00:00.287) 0:00:03.930 ********* 2026-03-17 01:20:10.688002 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.688006 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:20:10.688009 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:20:10.688013 | orchestrator | 2026-03-17 01:20:10.688017 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-17 01:20:10.688021 | orchestrator | Tuesday 17 March 2026 01:19:59 +0000 (0:00:00.464) 0:00:04.395 ********* 2026-03-17 01:20:10.688025 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.688029 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:20:10.688032 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:20:10.688036 | orchestrator | 2026-03-17 01:20:10.688040 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-03-17 01:20:10.688044 | orchestrator | Tuesday 17 March 2026 01:19:59 +0000 (0:00:00.287) 0:00:04.682 ********* 2026-03-17 01:20:10.688047 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:10.688051 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:20:10.688055 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:20:10.688059 | orchestrator | 2026-03-17 
01:20:10.688063 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-03-17 01:20:10.688066 | orchestrator | Tuesday 17 March 2026 01:19:59 +0000 (0:00:00.288) 0:00:04.971 ********* 2026-03-17 01:20:10.688070 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.688074 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:20:10.688077 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:20:10.688081 | orchestrator | 2026-03-17 01:20:10.688096 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-17 01:20:10.688101 | orchestrator | Tuesday 17 March 2026 01:20:00 +0000 (0:00:00.496) 0:00:05.468 ********* 2026-03-17 01:20:10.688105 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:10.688109 | orchestrator | 2026-03-17 01:20:10.688114 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-17 01:20:10.688118 | orchestrator | Tuesday 17 March 2026 01:20:00 +0000 (0:00:00.247) 0:00:05.716 ********* 2026-03-17 01:20:10.688122 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:10.688127 | orchestrator | 2026-03-17 01:20:10.688131 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-17 01:20:10.688135 | orchestrator | Tuesday 17 March 2026 01:20:00 +0000 (0:00:00.237) 0:00:05.953 ********* 2026-03-17 01:20:10.688140 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:10.688144 | orchestrator | 2026-03-17 01:20:10.688149 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:20:10.688159 | orchestrator | Tuesday 17 March 2026 01:20:01 +0000 (0:00:00.236) 0:00:06.190 ********* 2026-03-17 01:20:10.688164 | orchestrator | 2026-03-17 01:20:10.688168 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:20:10.688173 | orchestrator | 
Tuesday 17 March 2026 01:20:01 +0000 (0:00:00.067) 0:00:06.257 ********* 2026-03-17 01:20:10.688177 | orchestrator | 2026-03-17 01:20:10.688181 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:20:10.688186 | orchestrator | Tuesday 17 March 2026 01:20:01 +0000 (0:00:00.068) 0:00:06.326 ********* 2026-03-17 01:20:10.688190 | orchestrator | 2026-03-17 01:20:10.688194 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-17 01:20:10.688198 | orchestrator | Tuesday 17 March 2026 01:20:01 +0000 (0:00:00.070) 0:00:06.397 ********* 2026-03-17 01:20:10.688203 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:10.688207 | orchestrator | 2026-03-17 01:20:10.688212 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-17 01:20:10.688217 | orchestrator | Tuesday 17 March 2026 01:20:01 +0000 (0:00:00.239) 0:00:06.636 ********* 2026-03-17 01:20:10.688221 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:10.688226 | orchestrator | 2026-03-17 01:20:10.688244 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-03-17 01:20:10.688249 | orchestrator | Tuesday 17 March 2026 01:20:01 +0000 (0:00:00.238) 0:00:06.875 ********* 2026-03-17 01:20:10.688254 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.688258 | orchestrator | 2026-03-17 01:20:10.688263 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-03-17 01:20:10.688267 | orchestrator | Tuesday 17 March 2026 01:20:01 +0000 (0:00:00.099) 0:00:06.974 ********* 2026-03-17 01:20:10.688272 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:20:10.688276 | orchestrator | 2026-03-17 01:20:10.688281 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-03-17 01:20:10.688285 | orchestrator | 
Tuesday 17 March 2026 01:20:03 +0000 (0:00:01.860) 0:00:08.835 ********* 2026-03-17 01:20:10.688290 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.688294 | orchestrator | 2026-03-17 01:20:10.688299 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-03-17 01:20:10.688303 | orchestrator | Tuesday 17 March 2026 01:20:04 +0000 (0:00:00.457) 0:00:09.292 ********* 2026-03-17 01:20:10.688307 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:10.688312 | orchestrator | 2026-03-17 01:20:10.688316 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-03-17 01:20:10.688321 | orchestrator | Tuesday 17 March 2026 01:20:04 +0000 (0:00:00.129) 0:00:09.421 ********* 2026-03-17 01:20:10.688325 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.688329 | orchestrator | 2026-03-17 01:20:10.688334 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-03-17 01:20:10.688338 | orchestrator | Tuesday 17 March 2026 01:20:04 +0000 (0:00:00.295) 0:00:09.717 ********* 2026-03-17 01:20:10.688342 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.688347 | orchestrator | 2026-03-17 01:20:10.688351 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-03-17 01:20:10.688356 | orchestrator | Tuesday 17 March 2026 01:20:04 +0000 (0:00:00.288) 0:00:10.006 ********* 2026-03-17 01:20:10.688360 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:10.688364 | orchestrator | 2026-03-17 01:20:10.688368 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-03-17 01:20:10.688373 | orchestrator | Tuesday 17 March 2026 01:20:05 +0000 (0:00:00.108) 0:00:10.114 ********* 2026-03-17 01:20:10.688377 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.688381 | orchestrator | 2026-03-17 01:20:10.688386 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2026-03-17 01:20:10.688390 | orchestrator | Tuesday 17 March 2026 01:20:05 +0000 (0:00:00.116) 0:00:10.231 ********* 2026-03-17 01:20:10.688394 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.688404 | orchestrator | 2026-03-17 01:20:10.688409 | orchestrator | TASK [Gather status data] ****************************************************** 2026-03-17 01:20:10.688413 | orchestrator | Tuesday 17 March 2026 01:20:05 +0000 (0:00:00.143) 0:00:10.375 ********* 2026-03-17 01:20:10.688417 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:20:10.688422 | orchestrator | 2026-03-17 01:20:10.688426 | orchestrator | TASK [Set health test data] **************************************************** 2026-03-17 01:20:10.688431 | orchestrator | Tuesday 17 March 2026 01:20:06 +0000 (0:00:01.505) 0:00:11.881 ********* 2026-03-17 01:20:10.688435 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.688440 | orchestrator | 2026-03-17 01:20:10.688444 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-03-17 01:20:10.688449 | orchestrator | Tuesday 17 March 2026 01:20:07 +0000 (0:00:00.290) 0:00:12.171 ********* 2026-03-17 01:20:10.688453 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:10.688458 | orchestrator | 2026-03-17 01:20:10.688464 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-03-17 01:20:10.688488 | orchestrator | Tuesday 17 March 2026 01:20:07 +0000 (0:00:00.135) 0:00:12.306 ********* 2026-03-17 01:20:10.688495 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:10.688500 | orchestrator | 2026-03-17 01:20:10.688508 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-03-17 01:20:10.688517 | orchestrator | Tuesday 17 March 2026 01:20:07 +0000 (0:00:00.138) 0:00:12.445 ********* 2026-03-17 01:20:10.688523 | 
orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:10.688529 | orchestrator | 2026-03-17 01:20:10.688535 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-03-17 01:20:10.688541 | orchestrator | Tuesday 17 March 2026 01:20:07 +0000 (0:00:00.132) 0:00:12.577 ********* 2026-03-17 01:20:10.688547 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:10.688553 | orchestrator | 2026-03-17 01:20:10.688559 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-17 01:20:10.688565 | orchestrator | Tuesday 17 March 2026 01:20:07 +0000 (0:00:00.295) 0:00:12.873 ********* 2026-03-17 01:20:10.688571 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:10.688577 | orchestrator | 2026-03-17 01:20:10.688583 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-17 01:20:10.688588 | orchestrator | Tuesday 17 March 2026 01:20:08 +0000 (0:00:00.255) 0:00:13.129 ********* 2026-03-17 01:20:10.688593 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:10.688600 | orchestrator | 2026-03-17 01:20:10.688615 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-17 01:20:10.688621 | orchestrator | Tuesday 17 March 2026 01:20:08 +0000 (0:00:00.244) 0:00:13.373 ********* 2026-03-17 01:20:10.688627 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:10.688634 | orchestrator | 2026-03-17 01:20:10.688641 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-17 01:20:10.688650 | orchestrator | Tuesday 17 March 2026 01:20:09 +0000 (0:00:01.626) 0:00:15.000 ********* 2026-03-17 01:20:10.688657 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:10.688663 | orchestrator | 2026-03-17 01:20:10.688669 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-03-17 01:20:10.688675 | orchestrator | Tuesday 17 March 2026 01:20:10 +0000 (0:00:00.264) 0:00:15.265 ********* 2026-03-17 01:20:10.688681 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:10.688687 | orchestrator | 2026-03-17 01:20:10.688699 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:20:13.133047 | orchestrator | Tuesday 17 March 2026 01:20:10 +0000 (0:00:00.264) 0:00:15.530 ********* 2026-03-17 01:20:13.133118 | orchestrator | 2026-03-17 01:20:13.133124 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:20:13.133129 | orchestrator | Tuesday 17 March 2026 01:20:10 +0000 (0:00:00.071) 0:00:15.601 ********* 2026-03-17 01:20:13.133150 | orchestrator | 2026-03-17 01:20:13.133155 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:20:13.133159 | orchestrator | Tuesday 17 March 2026 01:20:10 +0000 (0:00:00.067) 0:00:15.669 ********* 2026-03-17 01:20:13.133163 | orchestrator | 2026-03-17 01:20:13.133167 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-17 01:20:13.133171 | orchestrator | Tuesday 17 March 2026 01:20:10 +0000 (0:00:00.072) 0:00:15.741 ********* 2026-03-17 01:20:13.133176 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:13.133180 | orchestrator | 2026-03-17 01:20:13.133184 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-17 01:20:13.133188 | orchestrator | Tuesday 17 March 2026 01:20:12 +0000 (0:00:01.394) 0:00:17.136 ********* 2026-03-17 01:20:13.133191 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-17 01:20:13.133195 | orchestrator |  "msg": [ 
2026-03-17 01:20:13.133200 | orchestrator |  "Validator run completed.", 2026-03-17 01:20:13.133204 | orchestrator |  "You can find the report file here:", 2026-03-17 01:20:13.133208 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-17T01:19:56+00:00-report.json", 2026-03-17 01:20:13.133213 | orchestrator |  "on the following host:", 2026-03-17 01:20:13.133217 | orchestrator |  "testbed-manager" 2026-03-17 01:20:13.133221 | orchestrator |  ] 2026-03-17 01:20:13.133225 | orchestrator | } 2026-03-17 01:20:13.133229 | orchestrator | 2026-03-17 01:20:13.133233 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:20:13.133238 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-17 01:20:13.133244 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 01:20:13.133249 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 01:20:13.133252 | orchestrator | 2026-03-17 01:20:13.133256 | orchestrator | 2026-03-17 01:20:13.133260 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:20:13.133264 | orchestrator | Tuesday 17 March 2026 01:20:12 +0000 (0:00:00.759) 0:00:17.896 ********* 2026-03-17 01:20:13.133268 | orchestrator | =============================================================================== 2026-03-17 01:20:13.133272 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.86s 2026-03-17 01:20:13.133276 | orchestrator | Aggregate test results step one ----------------------------------------- 1.63s 2026-03-17 01:20:13.133279 | orchestrator | Gather status data ------------------------------------------------------ 1.51s 2026-03-17 01:20:13.133283 | orchestrator | Write report file 
------------------------------------------------------- 1.39s 2026-03-17 01:20:13.133287 | orchestrator | Get container info ------------------------------------------------------ 1.08s 2026-03-17 01:20:13.133291 | orchestrator | Create report output directory ------------------------------------------ 0.93s 2026-03-17 01:20:13.133306 | orchestrator | Get timestamp for report file ------------------------------------------- 0.79s 2026-03-17 01:20:13.133309 | orchestrator | Print report file information ------------------------------------------- 0.76s 2026-03-17 01:20:13.133313 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.50s 2026-03-17 01:20:13.133317 | orchestrator | Set test result to passed if container is existing ---------------------- 0.46s 2026-03-17 01:20:13.133321 | orchestrator | Set quorum test data ---------------------------------------------------- 0.46s 2026-03-17 01:20:13.133325 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.30s 2026-03-17 01:20:13.133329 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.30s 2026-03-17 01:20:13.133332 | orchestrator | Set health test data ---------------------------------------------------- 0.29s 2026-03-17 01:20:13.133341 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.29s 2026-03-17 01:20:13.133345 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.29s 2026-03-17 01:20:13.133348 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2026-03-17 01:20:13.133352 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-03-17 01:20:13.133356 | orchestrator | Prepare test data for container existance test -------------------------- 0.27s 2026-03-17 01:20:13.133360 | orchestrator | Aggregate test results step three 
--------------------------------------- 0.26s 2026-03-17 01:20:13.398197 | orchestrator | + osism validate ceph-mgrs 2026-03-17 01:20:43.869377 | orchestrator | 2026-03-17 01:20:43.869443 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-03-17 01:20:43.869451 | orchestrator | 2026-03-17 01:20:43.869456 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-17 01:20:43.869461 | orchestrator | Tuesday 17 March 2026 01:20:29 +0000 (0:00:00.423) 0:00:00.423 ********* 2026-03-17 01:20:43.869467 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:43.869471 | orchestrator | 2026-03-17 01:20:43.869476 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-17 01:20:43.869480 | orchestrator | Tuesday 17 March 2026 01:20:30 +0000 (0:00:00.805) 0:00:01.229 ********* 2026-03-17 01:20:43.869485 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:43.869489 | orchestrator | 2026-03-17 01:20:43.869493 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-17 01:20:43.869497 | orchestrator | Tuesday 17 March 2026 01:20:31 +0000 (0:00:00.924) 0:00:02.153 ********* 2026-03-17 01:20:43.869502 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:43.869507 | orchestrator | 2026-03-17 01:20:43.869511 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-17 01:20:43.869515 | orchestrator | Tuesday 17 March 2026 01:20:31 +0000 (0:00:00.125) 0:00:02.279 ********* 2026-03-17 01:20:43.869520 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:43.869525 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:20:43.869529 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:20:43.869534 | orchestrator | 2026-03-17 01:20:43.869541 | orchestrator | TASK [Get container info] 
****************************************************** 2026-03-17 01:20:43.869574 | orchestrator | Tuesday 17 March 2026 01:20:32 +0000 (0:00:00.268) 0:00:02.548 ********* 2026-03-17 01:20:43.869582 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:20:43.869588 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:20:43.869594 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:43.869600 | orchestrator | 2026-03-17 01:20:43.869607 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-17 01:20:43.869613 | orchestrator | Tuesday 17 March 2026 01:20:33 +0000 (0:00:01.070) 0:00:03.618 ********* 2026-03-17 01:20:43.869619 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:43.869625 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:20:43.869632 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:20:43.869639 | orchestrator | 2026-03-17 01:20:43.869646 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-17 01:20:43.869652 | orchestrator | Tuesday 17 March 2026 01:20:33 +0000 (0:00:00.294) 0:00:03.913 ********* 2026-03-17 01:20:43.869659 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:43.869667 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:20:43.869674 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:20:43.869680 | orchestrator | 2026-03-17 01:20:43.869687 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-17 01:20:43.869694 | orchestrator | Tuesday 17 March 2026 01:20:33 +0000 (0:00:00.449) 0:00:04.362 ********* 2026-03-17 01:20:43.869701 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:43.869707 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:20:43.869714 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:20:43.869721 | orchestrator | 2026-03-17 01:20:43.869748 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 
2026-03-17 01:20:43.869756 | orchestrator | Tuesday 17 March 2026 01:20:34 +0000 (0:00:00.294) 0:00:04.657 ********* 2026-03-17 01:20:43.869764 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:43.869771 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:20:43.869778 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:20:43.869785 | orchestrator | 2026-03-17 01:20:43.869793 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-03-17 01:20:43.869797 | orchestrator | Tuesday 17 March 2026 01:20:34 +0000 (0:00:00.274) 0:00:04.932 ********* 2026-03-17 01:20:43.869801 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:43.869806 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:20:43.869810 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:20:43.869814 | orchestrator | 2026-03-17 01:20:43.869818 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-17 01:20:43.869823 | orchestrator | Tuesday 17 March 2026 01:20:34 +0000 (0:00:00.439) 0:00:05.371 ********* 2026-03-17 01:20:43.869827 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:43.869831 | orchestrator | 2026-03-17 01:20:43.869835 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-17 01:20:43.869840 | orchestrator | Tuesday 17 March 2026 01:20:35 +0000 (0:00:00.234) 0:00:05.605 ********* 2026-03-17 01:20:43.869844 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:43.869848 | orchestrator | 2026-03-17 01:20:43.869863 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-17 01:20:43.869867 | orchestrator | Tuesday 17 March 2026 01:20:35 +0000 (0:00:00.243) 0:00:05.849 ********* 2026-03-17 01:20:43.869871 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:43.869875 | orchestrator | 2026-03-17 01:20:43.869879 | orchestrator | TASK [Flush handlers] 
********************************************************** 2026-03-17 01:20:43.869883 | orchestrator | Tuesday 17 March 2026 01:20:35 +0000 (0:00:00.235) 0:00:06.085 ********* 2026-03-17 01:20:43.869888 | orchestrator | 2026-03-17 01:20:43.869892 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:20:43.869896 | orchestrator | Tuesday 17 March 2026 01:20:35 +0000 (0:00:00.069) 0:00:06.155 ********* 2026-03-17 01:20:43.869900 | orchestrator | 2026-03-17 01:20:43.869904 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:20:43.869908 | orchestrator | Tuesday 17 March 2026 01:20:35 +0000 (0:00:00.068) 0:00:06.223 ********* 2026-03-17 01:20:43.869912 | orchestrator | 2026-03-17 01:20:43.869916 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-17 01:20:43.869920 | orchestrator | Tuesday 17 March 2026 01:20:35 +0000 (0:00:00.070) 0:00:06.293 ********* 2026-03-17 01:20:43.869925 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:43.869932 | orchestrator | 2026-03-17 01:20:43.869941 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-17 01:20:43.869952 | orchestrator | Tuesday 17 March 2026 01:20:36 +0000 (0:00:00.233) 0:00:06.527 ********* 2026-03-17 01:20:43.869958 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:43.869965 | orchestrator | 2026-03-17 01:20:43.869987 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-03-17 01:20:43.869994 | orchestrator | Tuesday 17 March 2026 01:20:36 +0000 (0:00:00.231) 0:00:06.758 ********* 2026-03-17 01:20:43.870000 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:43.870006 | orchestrator | 2026-03-17 01:20:43.870048 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2026-03-17 
01:20:43.870058 | orchestrator | Tuesday 17 March 2026 01:20:36 +0000 (0:00:00.106) 0:00:06.865 ********* 2026-03-17 01:20:43.870062 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:20:43.870067 | orchestrator | 2026-03-17 01:20:43.870074 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-03-17 01:20:43.870081 | orchestrator | Tuesday 17 March 2026 01:20:38 +0000 (0:00:02.250) 0:00:09.116 ********* 2026-03-17 01:20:43.870088 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:43.870104 | orchestrator | 2026-03-17 01:20:43.870111 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-03-17 01:20:43.870118 | orchestrator | Tuesday 17 March 2026 01:20:39 +0000 (0:00:00.412) 0:00:09.529 ********* 2026-03-17 01:20:43.870125 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:43.870132 | orchestrator | 2026-03-17 01:20:43.870140 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-03-17 01:20:43.870147 | orchestrator | Tuesday 17 March 2026 01:20:39 +0000 (0:00:00.305) 0:00:09.834 ********* 2026-03-17 01:20:43.870154 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:43.870161 | orchestrator | 2026-03-17 01:20:43.870167 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-03-17 01:20:43.870173 | orchestrator | Tuesday 17 March 2026 01:20:39 +0000 (0:00:00.139) 0:00:09.974 ********* 2026-03-17 01:20:43.870180 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:20:43.870187 | orchestrator | 2026-03-17 01:20:43.870194 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-17 01:20:43.870201 | orchestrator | Tuesday 17 March 2026 01:20:39 +0000 (0:00:00.138) 0:00:10.112 ********* 2026-03-17 01:20:43.870208 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:43.870216 | 
orchestrator | 2026-03-17 01:20:43.870223 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-17 01:20:43.870230 | orchestrator | Tuesday 17 March 2026 01:20:39 +0000 (0:00:00.241) 0:00:10.354 ********* 2026-03-17 01:20:43.870236 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:20:43.870243 | orchestrator | 2026-03-17 01:20:43.870250 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-17 01:20:43.870257 | orchestrator | Tuesday 17 March 2026 01:20:40 +0000 (0:00:00.243) 0:00:10.597 ********* 2026-03-17 01:20:43.870263 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:43.870270 | orchestrator | 2026-03-17 01:20:43.870277 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-17 01:20:43.870284 | orchestrator | Tuesday 17 March 2026 01:20:41 +0000 (0:00:01.216) 0:00:11.814 ********* 2026-03-17 01:20:43.870291 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:43.870297 | orchestrator | 2026-03-17 01:20:43.870304 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-17 01:20:43.870311 | orchestrator | Tuesday 17 March 2026 01:20:41 +0000 (0:00:00.235) 0:00:12.049 ********* 2026-03-17 01:20:43.870318 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:43.870325 | orchestrator | 2026-03-17 01:20:43.870332 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:20:43.870339 | orchestrator | Tuesday 17 March 2026 01:20:41 +0000 (0:00:00.241) 0:00:12.291 ********* 2026-03-17 01:20:43.870345 | orchestrator | 2026-03-17 01:20:43.870352 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:20:43.870359 | orchestrator | Tuesday 17 
March 2026 01:20:41 +0000 (0:00:00.069) 0:00:12.360 ********* 2026-03-17 01:20:43.870366 | orchestrator | 2026-03-17 01:20:43.870372 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:20:43.870379 | orchestrator | Tuesday 17 March 2026 01:20:41 +0000 (0:00:00.066) 0:00:12.427 ********* 2026-03-17 01:20:43.870386 | orchestrator | 2026-03-17 01:20:43.870392 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-17 01:20:43.870399 | orchestrator | Tuesday 17 March 2026 01:20:42 +0000 (0:00:00.234) 0:00:12.661 ********* 2026-03-17 01:20:43.870406 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-17 01:20:43.870413 | orchestrator | 2026-03-17 01:20:43.870420 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-17 01:20:43.870433 | orchestrator | Tuesday 17 March 2026 01:20:43 +0000 (0:00:01.288) 0:00:13.950 ********* 2026-03-17 01:20:43.870440 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-17 01:20:43.870453 | orchestrator |  "msg": [ 2026-03-17 01:20:43.870459 | orchestrator |  "Validator run completed.", 2026-03-17 01:20:43.870466 | orchestrator |  "You can find the report file here:", 2026-03-17 01:20:43.870483 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-17T01:20:30+00:00-report.json", 2026-03-17 01:20:43.870491 | orchestrator |  "on the following host:", 2026-03-17 01:20:43.870506 | orchestrator |  "testbed-manager" 2026-03-17 01:20:43.870513 | orchestrator |  ] 2026-03-17 01:20:43.870519 | orchestrator | } 2026-03-17 01:20:43.870526 | orchestrator | 2026-03-17 01:20:43.870532 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:20:43.870540 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 
2026-03-17 01:20:43.870562 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 01:20:43.870579 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 01:20:44.168412 | orchestrator |
2026-03-17 01:20:44.168490 | orchestrator |
2026-03-17 01:20:44.168496 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:20:44.168503 | orchestrator | Tuesday 17 March 2026 01:20:43 +0000 (0:00:00.403) 0:00:14.353 *********
2026-03-17 01:20:44.168507 | orchestrator | ===============================================================================
2026-03-17 01:20:44.168511 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.25s
2026-03-17 01:20:44.168516 | orchestrator | Write report file ------------------------------------------------------- 1.29s
2026-03-17 01:20:44.168520 | orchestrator | Aggregate test results step one ----------------------------------------- 1.22s
2026-03-17 01:20:44.168524 | orchestrator | Get container info ------------------------------------------------------ 1.07s
2026-03-17 01:20:44.168528 | orchestrator | Create report output directory ------------------------------------------ 0.92s
2026-03-17 01:20:44.168531 | orchestrator | Get timestamp for report file ------------------------------------------- 0.81s
2026-03-17 01:20:44.168535 | orchestrator | Set test result to passed if container is existing ---------------------- 0.45s
2026-03-17 01:20:44.168539 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.44s
2026-03-17 01:20:44.168542 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.41s
2026-03-17 01:20:44.168564 | orchestrator | Print report file information ------------------------------------------- 0.40s
2026-03-17 01:20:44.168568 | orchestrator | Flush handlers ---------------------------------------------------------- 0.37s
2026-03-17 01:20:44.168572 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.31s
2026-03-17 01:20:44.168575 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s
2026-03-17 01:20:44.168579 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s
2026-03-17 01:20:44.168583 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.27s
2026-03-17 01:20:44.168587 | orchestrator | Prepare test data for container existance test -------------------------- 0.27s
2026-03-17 01:20:44.168591 | orchestrator | Aggregate test results step two ----------------------------------------- 0.24s
2026-03-17 01:20:44.168594 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.24s
2026-03-17 01:20:44.168598 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.24s
2026-03-17 01:20:44.168602 | orchestrator | Aggregate test results step three --------------------------------------- 0.24s
2026-03-17 01:20:44.524517 | orchestrator | + osism validate ceph-osds
2026-03-17 01:21:05.358962 | orchestrator |
2026-03-17 01:21:05.359045 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-03-17 01:21:05.359054 | orchestrator |
2026-03-17 01:21:05.359077 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-17 01:21:05.359085 | orchestrator | Tuesday 17 March 2026 01:21:01 +0000 (0:00:00.418) 0:00:00.418 *********
2026-03-17 01:21:05.359092 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-17 01:21:05.359099 | orchestrator |
2026-03-17 01:21:05.359106 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-17 01:21:05.359112 | orchestrator | Tuesday 17 March 2026 01:21:01 +0000 (0:00:00.847) 0:00:01.266 *********
2026-03-17 01:21:05.359118 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-17 01:21:05.359125 | orchestrator |
2026-03-17 01:21:05.359131 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-17 01:21:05.359137 | orchestrator | Tuesday 17 March 2026 01:21:02 +0000 (0:00:00.519) 0:00:01.786 *********
2026-03-17 01:21:05.359144 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-17 01:21:05.359150 | orchestrator |
2026-03-17 01:21:05.359156 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-17 01:21:05.359162 | orchestrator | Tuesday 17 March 2026 01:21:03 +0000 (0:00:00.124) 0:00:02.474 *********
2026-03-17 01:21:05.359168 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:21:05.359176 | orchestrator |
2026-03-17 01:21:05.359182 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-17 01:21:05.359193 | orchestrator | Tuesday 17 March 2026 01:21:03 +0000 (0:00:00.142) 0:00:02.599 *********
2026-03-17 01:21:05.359199 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:21:05.359206 | orchestrator |
2026-03-17 01:21:05.359212 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-17 01:21:05.359218 | orchestrator | Tuesday 17 March 2026 01:21:03 +0000 (0:00:00.292) 0:00:02.741 *********
2026-03-17 01:21:05.359224 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:21:05.359230 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:21:05.359236 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:21:05.359243 | orchestrator |
2026-03-17 01:21:05.359249 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-17 01:21:05.359255 | orchestrator | Tuesday 17 March 2026 01:21:03 +0000 (0:00:00.146) 0:00:03.033 *********
2026-03-17 01:21:05.359261 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:21:05.359267 | orchestrator |
2026-03-17 01:21:05.359273 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-17 01:21:05.359279 | orchestrator | Tuesday 17 March 2026 01:21:03 +0000 (0:00:00.304) 0:00:03.179 *********
2026-03-17 01:21:05.359286 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:21:05.359292 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:21:05.359298 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:21:05.359304 | orchestrator |
2026-03-17 01:21:05.359310 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-03-17 01:21:05.359316 | orchestrator | Tuesday 17 March 2026 01:21:04 +0000 (0:00:00.692) 0:00:03.484 *********
2026-03-17 01:21:05.359323 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:21:05.359329 | orchestrator |
2026-03-17 01:21:05.359335 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-17 01:21:05.359341 | orchestrator | Tuesday 17 March 2026 01:21:04 +0000 (0:00:00.264) 0:00:04.177 *********
2026-03-17 01:21:05.359347 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:21:05.359353 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:21:05.359359 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:21:05.359365 | orchestrator |
2026-03-17 01:21:05.359372 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-03-17 01:21:05.359377 | orchestrator | Tuesday 17 March 2026 01:21:05 +0000 (0:00:00.264) 0:00:04.441 *********
2026-03-17 01:21:05.359386 | orchestrator | skipping: [testbed-node-3] => (item={'id': '49b46459a09a3c2a412e886c73a76e22ba89c374ee4f341babcf1046be3a2fca', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-03-17 01:21:05.359399 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9533e00a08935b4dde6036d12bd59e1de5bca6b1b7d44f84b298b3948c7af6ea', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-03-17 01:21:05.359405 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2be28e21f6c30ba604f0c9861163d8748048c4a5e3faa2cc82a9f5d1527e9207', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2026-03-17 01:21:05.359413 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f7009f061c55790a589f922b5ba5c414c5fa46233ad08ac43dd32a928b281699', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-03-17 01:21:05.359420 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1ef2c26693b9827d27396d83336192789e66cddc64264523e280962f7015eac1', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})
2026-03-17 01:21:05.359439 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6b1fdd67c88797398f2f596ba15355ffe2862e62c5b64cff33789facb7656140', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2026-03-17 01:21:05.359446 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e9a8f8a193c63833cb259b688d2efc5a2315d2e6a1cd1b6a220d361e80ba8d8e', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2026-03-17 01:21:05.359452 | orchestrator | skipping: [testbed-node-3] => (item={'id': '77d2fe78ea17fb4324f8f8a20817e09638b808d884942aa247538e8c4e6ca5b9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})
2026-03-17 01:21:05.359459 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8cd97a1c5cefa7a2862f739ff9f6a307051222db44c82980fc6af7fbd12a87c5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-17 01:21:05.359469 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd50fe0b811cb9a08d3a47fe55f2ad2ecfa2778c5e81fcb56e967b2c6fce2f2bd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-17 01:21:05.359478 | orchestrator | ok: [testbed-node-3] => (item={'id': 'e5ad9ef47f1b639b3e1858723d344ebf55cd6bdeedaeaa9a07f16df5419f1853', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'})
2026-03-17 01:21:05.359485 | orchestrator | ok: [testbed-node-3] => (item={'id': 'fd7a96f690247a7bbb0c7d6e9271fe77e0e78a99859bc83cf6c1068fbf17b434', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'})
2026-03-17 01:21:05.359490 | orchestrator | skipping: [testbed-node-3] => (item={'id': '96fadbe5f9c708f611c042cb888ddfa7f5d1108a337a6fc37800eea9922ce8cb', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})
2026-03-17 01:21:05.359495 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5fe5279a0c2aa6a2790f6c631dda3abb2a1035acb1b46c911649e0a4af0aa9e8', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2026-03-17 01:21:05.359499 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5164f9319330d3d4e160defa0583b8e55ba057c525c8882871cdf9818a6c2e82', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2026-03-17 01:21:05.359523 | orchestrator | skipping: [testbed-node-3] => (item={'id': '16f7e6111e85c3d5567b5ed793cd5036f171b356b0635a207901d7108a38b67a', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})
2026-03-17 01:21:05.359527 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1f9e6efa184b216a5f774c1f66fca3fe6fb532f51cb13d23835b2570da9dcede', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})
2026-03-17 01:21:05.359531 | orchestrator | skipping: [testbed-node-3] => (item={'id': '16f70417fac0a4517f118c668ce7988134aa17c47f443c930f951e6c2ead1b8f', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})
2026-03-17 01:21:05.359535 | orchestrator | skipping: [testbed-node-4] => (item={'id': '90b5c40424efd83aea8d9d23b6111f96ccd40ca25376665494239cbaf052298c', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-03-17 01:21:05.359539 | orchestrator | skipping: [testbed-node-4] => (item={'id': '265cf76ffa468cd7ec42f73f39591e981242e9ced69375451c0d5f99e79ef282', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-03-17 01:21:05.359546 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fe3e94513fdcd9f201ed92adebd800e3faec6ab26720273a2d875ab7cc28222a', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-17 01:21:05.602780 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fd86365f95d75b4d270bd881c4dcfd0d874743a9b19a07cbdc54c801a8944576', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-03-17 01:21:05.602864 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ef50df3abf36373cb7aa5182d8fdbcc7e29257ffe0be8819dbbf6f686ffc495c', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2026-03-17 01:21:05.602875 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4071e6660a12c686a7ca2f99cb42d40e04ab0695971343c4fcb2033af38f9091', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-03-17 01:21:05.602885 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8fe0a21c5805e4ceebec1ae017ef626f021f466dfac6fdad3b8c5e7e298baeea', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2026-03-17 01:21:05.602892 | orchestrator | skipping: [testbed-node-4] => (item={'id': '14c8375f099398541e21bb5349aeb47439d22ae6438c17c678c1a0f14454795d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-03-17 01:21:05.602900 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': 'dfd78f975d890621345937131926a5354ab26044b85f112eddd712ecc3a59c40', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-17 01:21:05.602926 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8c79b35a81b054d17688591a7ece4b30a60a2a5895d0276593d349a5097b5f09', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-17 01:21:05.602951 | orchestrator | ok: [testbed-node-4] => (item={'id': 'e7cc95fad114de0ab92d4384abb33bfd7bcf42d8c830edd6f797c4393d1cdca0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-17 01:21:05.602959 | orchestrator | ok: [testbed-node-4] => (item={'id': '5cf0dc9f87502aab1097db7523682d4ccf4a3b0b4659fbf7ddee7627e6586c4b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-17 01:21:05.602965 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0ec21d03dafd47a13f1a742d4d9678c59a94840f54798051b29c64bc406af5c1', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-17 01:21:05.602971 | orchestrator | skipping: [testbed-node-4] => (item={'id': '15a017f5b9081af2a18c19a1f014c2b7a50814ff0565c69ac2b71b05d3d93e42', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-03-17 01:21:05.602978 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6ba7a0fa12ce10655147e207590f57eaf028ebf2a65e206efb8a4538ac179128', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 
'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-03-17 01:21:05.602985 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'df5285402c1c55d9054a634c2932367f5db3a18ea0f491bb67e64ef850f365bc', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-17 01:21:05.602992 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b3ed81774c89a2c0f5bd9c4631df65fcba51c090f82cb4965594a829a57c1a63', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-17 01:21:05.603013 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2d7b5253a85f67041759029e0c623cdbb585b8a1a3e2276bb581875110793476', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-17 01:21:05.603020 | orchestrator | skipping: [testbed-node-5] => (item={'id': '03d1dcc41fd538d7f72268dbeb767618ec16a0d622bf489460ff5b8f98311b82', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-03-17 01:21:05.603028 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9b70feedb77f3d1b56bd58ec6565f941a2af7315c1064edd76b01096adce766f', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-03-17 01:21:05.603034 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e2788eb2f7c8b8f8fe51c7b06df36a60c9e19a66d6ee2b11ae0cd49bf32a0627', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-17 01:21:05.603044 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'db76ffdc602ecd4e45b46632df990a25d32aa6b33253ff6950732a4c225164d2', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-03-17 01:21:05.603051 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3ef6b98c568fc2bca9006427e8d9766e7254cc1448f8cc56c40f85a5322c121d', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2026-03-17 01:21:05.603058 | orchestrator | skipping: [testbed-node-5] => (item={'id': '19a585b163f4a3dd8e9cf615c73d34527d4874dc54101d11aa212c04ee37e26c', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-03-17 01:21:05.603069 | orchestrator | skipping: [testbed-node-5] => (item={'id': '28766b25ed60fca00d5c9548d78af158b3854b7c71f2a420192e72dc27ad8652', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2026-03-17 01:21:05.603076 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7accdafe7d5fce7cb84a313b86eba5432259d47fb2aae0219e55ffb3a3757ecd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-03-17 01:21:05.603082 | orchestrator | skipping: [testbed-node-5] => (item={'id': '889aa7a72f20cfa0db88c9b530f0a19943553d42a5824ec3719ed5f417775e1b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-17 01:21:05.603089 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3202128f722da6a8fb6477ee063b83d6bd75143e1151155ddc193f6b0eec8f3b', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-17 01:21:05.603095 | orchestrator | ok: [testbed-node-5] => (item={'id': 'a1283eccf188740b08a6529442b3fddb294c47e004c2087428cb55aab8205ad0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-17 01:21:05.603101 | orchestrator | ok: [testbed-node-5] => (item={'id': '58bd9ea00a1b44a83ba904fe164f85a46b0e6a5d03d6df9fe126249e48878af0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-17 01:21:05.603108 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f6bfd94dd492cc0e1392c7e1462e9fe53875c008e09283c8bfa26eb72eb84977', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-17 01:21:05.603114 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b245dd048df053e65f6e64bf1ec91195378cc1cf61ff61d58289249ed2818816', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-03-17 01:21:05.603126 | orchestrator | skipping: [testbed-node-5] => (item={'id': '01fc3523ef357e9cad0e9af4b949ea7118a954f57274471fe80546f399ecb071', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-03-17 01:21:17.647971 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2a3a5e9d51c4cf7022420fa62f3ea17cbe1351d5d155762c252465cf6f4fbe8d', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-17 01:21:17.648051 | orchestrator | skipping: 
[testbed-node-5] => (item={'id': 'f1e4e7689cccd2d8f61d2ffe533a24229a67205063e656c7a3252c12474f5a42', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-17 01:21:17.648064 | orchestrator | skipping: [testbed-node-5] => (item={'id': '75d66c85866715493a4d57d599a94e97934484dac16e019481c10d1d47aca0ab', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-17 01:21:17.648071 | orchestrator | 2026-03-17 01:21:17.648079 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-03-17 01:21:17.648101 | orchestrator | Tuesday 17 March 2026 01:21:05 +0000 (0:00:00.481) 0:00:04.922 ********* 2026-03-17 01:21:17.648127 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:17.648132 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:17.648136 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:21:17.648140 | orchestrator | 2026-03-17 01:21:17.648144 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-03-17 01:21:17.648148 | orchestrator | Tuesday 17 March 2026 01:21:05 +0000 (0:00:00.290) 0:00:05.213 ********* 2026-03-17 01:21:17.648152 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:17.648156 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:21:17.648160 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:21:17.648164 | orchestrator | 2026-03-17 01:21:17.648168 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-03-17 01:21:17.648171 | orchestrator | Tuesday 17 March 2026 01:21:06 +0000 (0:00:00.436) 0:00:05.650 ********* 2026-03-17 01:21:17.648175 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:17.648179 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:17.648182 | orchestrator | ok: 
[testbed-node-5] 2026-03-17 01:21:17.648186 | orchestrator | 2026-03-17 01:21:17.648190 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-17 01:21:17.648194 | orchestrator | Tuesday 17 March 2026 01:21:06 +0000 (0:00:00.298) 0:00:05.948 ********* 2026-03-17 01:21:17.648197 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:17.648201 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:17.648205 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:21:17.648233 | orchestrator | 2026-03-17 01:21:17.648237 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-03-17 01:21:17.648240 | orchestrator | Tuesday 17 March 2026 01:21:06 +0000 (0:00:00.270) 0:00:06.219 ********* 2026-03-17 01:21:17.648244 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-03-17 01:21:17.648261 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-03-17 01:21:17.648265 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:17.648268 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-03-17 01:21:17.648273 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-03-17 01:21:17.648276 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:21:17.648280 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-03-17 01:21:17.648284 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-03-17 01:21:17.648288 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:21:17.648292 | orchestrator | 2026-03-17 01:21:17.648295 | orchestrator | TASK [Get count of ceph-osd containers that are not running] 
******************* 2026-03-17 01:21:17.648299 | orchestrator | Tuesday 17 March 2026 01:21:07 +0000 (0:00:00.311) 0:00:06.530 ********* 2026-03-17 01:21:17.648303 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:17.648307 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:17.648311 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:21:17.648314 | orchestrator | 2026-03-17 01:21:17.648318 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-17 01:21:17.648322 | orchestrator | Tuesday 17 March 2026 01:21:07 +0000 (0:00:00.443) 0:00:06.974 ********* 2026-03-17 01:21:17.648326 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:17.648329 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:21:17.648333 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:21:17.648337 | orchestrator | 2026-03-17 01:21:17.648343 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-17 01:21:17.648350 | orchestrator | Tuesday 17 March 2026 01:21:07 +0000 (0:00:00.293) 0:00:07.267 ********* 2026-03-17 01:21:17.648356 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:17.648362 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:21:17.648368 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:21:17.648381 | orchestrator | 2026-03-17 01:21:17.648387 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-03-17 01:21:17.648393 | orchestrator | Tuesday 17 March 2026 01:21:08 +0000 (0:00:00.271) 0:00:07.539 ********* 2026-03-17 01:21:17.648397 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:17.648401 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:17.648404 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:21:17.648408 | orchestrator | 2026-03-17 01:21:17.648412 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-17 
01:21:17.648416 | orchestrator | Tuesday 17 March 2026 01:21:08 +0000 (0:00:00.283) 0:00:07.823 ********* 2026-03-17 01:21:17.648420 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:17.648424 | orchestrator | 2026-03-17 01:21:17.648438 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-17 01:21:17.648442 | orchestrator | Tuesday 17 March 2026 01:21:09 +0000 (0:00:00.597) 0:00:08.421 ********* 2026-03-17 01:21:17.648446 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:17.648450 | orchestrator | 2026-03-17 01:21:17.648454 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-17 01:21:17.648457 | orchestrator | Tuesday 17 March 2026 01:21:09 +0000 (0:00:00.253) 0:00:08.674 ********* 2026-03-17 01:21:17.648461 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:17.648465 | orchestrator | 2026-03-17 01:21:17.648468 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:21:17.648472 | orchestrator | Tuesday 17 March 2026 01:21:09 +0000 (0:00:00.240) 0:00:08.914 ********* 2026-03-17 01:21:17.648476 | orchestrator | 2026-03-17 01:21:17.648480 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:21:17.648483 | orchestrator | Tuesday 17 March 2026 01:21:09 +0000 (0:00:00.067) 0:00:08.982 ********* 2026-03-17 01:21:17.648487 | orchestrator | 2026-03-17 01:21:17.648491 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:21:17.648495 | orchestrator | Tuesday 17 March 2026 01:21:09 +0000 (0:00:00.066) 0:00:09.048 ********* 2026-03-17 01:21:17.648498 | orchestrator | 2026-03-17 01:21:17.648502 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-17 01:21:17.648506 | orchestrator | Tuesday 17 March 2026 01:21:09 +0000 
(0:00:00.081) 0:00:09.129 ********* 2026-03-17 01:21:17.648510 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:17.648514 | orchestrator | 2026-03-17 01:21:17.648518 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-03-17 01:21:17.648522 | orchestrator | Tuesday 17 March 2026 01:21:10 +0000 (0:00:00.264) 0:00:09.394 ********* 2026-03-17 01:21:17.648525 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:17.648530 | orchestrator | 2026-03-17 01:21:17.648535 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-17 01:21:17.648539 | orchestrator | Tuesday 17 March 2026 01:21:10 +0000 (0:00:00.302) 0:00:09.696 ********* 2026-03-17 01:21:17.648544 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:17.648549 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:17.648556 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:21:17.648563 | orchestrator | 2026-03-17 01:21:17.648569 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-03-17 01:21:17.648575 | orchestrator | Tuesday 17 March 2026 01:21:10 +0000 (0:00:00.285) 0:00:09.982 ********* 2026-03-17 01:21:17.648581 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:17.648587 | orchestrator | 2026-03-17 01:21:17.648593 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-03-17 01:21:17.648599 | orchestrator | Tuesday 17 March 2026 01:21:11 +0000 (0:00:00.596) 0:00:10.578 ********* 2026-03-17 01:21:17.648606 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-17 01:21:17.648613 | orchestrator | 2026-03-17 01:21:17.648636 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-03-17 01:21:17.648642 | orchestrator | Tuesday 17 March 2026 01:21:12 +0000 (0:00:01.709) 0:00:12.288 ********* 2026-03-17 01:21:17.648653 | 
orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:17.648662 | orchestrator | 2026-03-17 01:21:17.648669 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-03-17 01:21:17.648675 | orchestrator | Tuesday 17 March 2026 01:21:13 +0000 (0:00:00.143) 0:00:12.431 ********* 2026-03-17 01:21:17.648680 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:17.648687 | orchestrator | 2026-03-17 01:21:17.648692 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-03-17 01:21:17.648698 | orchestrator | Tuesday 17 March 2026 01:21:13 +0000 (0:00:00.306) 0:00:12.737 ********* 2026-03-17 01:21:17.648704 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:17.648711 | orchestrator | 2026-03-17 01:21:17.648717 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-03-17 01:21:17.648724 | orchestrator | Tuesday 17 March 2026 01:21:13 +0000 (0:00:00.121) 0:00:12.859 ********* 2026-03-17 01:21:17.648730 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:17.648736 | orchestrator | 2026-03-17 01:21:17.648744 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-17 01:21:17.648749 | orchestrator | Tuesday 17 March 2026 01:21:13 +0000 (0:00:00.123) 0:00:12.983 ********* 2026-03-17 01:21:17.648753 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:17.648758 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:17.648762 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:21:17.648767 | orchestrator | 2026-03-17 01:21:17.648771 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-03-17 01:21:17.648776 | orchestrator | Tuesday 17 March 2026 01:21:13 +0000 (0:00:00.276) 0:00:13.259 ********* 2026-03-17 01:21:17.648780 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:21:17.648785 | orchestrator | changed: 
[testbed-node-4] 2026-03-17 01:21:17.648789 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:21:17.648793 | orchestrator | 2026-03-17 01:21:17.648798 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-03-17 01:21:17.648802 | orchestrator | Tuesday 17 March 2026 01:21:16 +0000 (0:00:02.631) 0:00:15.891 ********* 2026-03-17 01:21:17.648807 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:17.648811 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:17.648816 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:21:17.648820 | orchestrator | 2026-03-17 01:21:17.648825 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-03-17 01:21:17.648829 | orchestrator | Tuesday 17 March 2026 01:21:16 +0000 (0:00:00.299) 0:00:16.190 ********* 2026-03-17 01:21:17.648834 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:17.648838 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:17.648842 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:21:17.648847 | orchestrator | 2026-03-17 01:21:17.648851 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-03-17 01:21:17.648856 | orchestrator | Tuesday 17 March 2026 01:21:17 +0000 (0:00:00.486) 0:00:16.676 ********* 2026-03-17 01:21:17.648860 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:17.648865 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:21:17.648869 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:21:17.648874 | orchestrator | 2026-03-17 01:21:17.648882 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-03-17 01:21:26.203738 | orchestrator | Tuesday 17 March 2026 01:21:17 +0000 (0:00:00.297) 0:00:16.974 ********* 2026-03-17 01:21:26.203903 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:26.203925 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:26.203936 | 
orchestrator | ok: [testbed-node-5] 2026-03-17 01:21:26.203947 | orchestrator | 2026-03-17 01:21:26.203959 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-03-17 01:21:26.203969 | orchestrator | Tuesday 17 March 2026 01:21:18 +0000 (0:00:00.497) 0:00:17.471 ********* 2026-03-17 01:21:26.203980 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:26.203993 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:21:26.204004 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:21:26.204041 | orchestrator | 2026-03-17 01:21:26.204053 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-03-17 01:21:26.204064 | orchestrator | Tuesday 17 March 2026 01:21:18 +0000 (0:00:00.287) 0:00:17.759 ********* 2026-03-17 01:21:26.204075 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:26.204085 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:21:26.204095 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:21:26.204101 | orchestrator | 2026-03-17 01:21:26.204108 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-17 01:21:26.204120 | orchestrator | Tuesday 17 March 2026 01:21:18 +0000 (0:00:00.329) 0:00:18.089 ********* 2026-03-17 01:21:26.204127 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:26.204133 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:26.204140 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:21:26.204146 | orchestrator | 2026-03-17 01:21:26.204152 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-03-17 01:21:26.204159 | orchestrator | Tuesday 17 March 2026 01:21:19 +0000 (0:00:00.475) 0:00:18.564 ********* 2026-03-17 01:21:26.204166 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:26.204172 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:26.204179 | orchestrator | ok: [testbed-node-5] 
2026-03-17 01:21:26.204185 | orchestrator | 2026-03-17 01:21:26.204192 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-03-17 01:21:26.204199 | orchestrator | Tuesday 17 March 2026 01:21:19 +0000 (0:00:00.721) 0:00:19.285 ********* 2026-03-17 01:21:26.204205 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:26.204212 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:26.204218 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:21:26.204225 | orchestrator | 2026-03-17 01:21:26.204231 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-03-17 01:21:26.204238 | orchestrator | Tuesday 17 March 2026 01:21:20 +0000 (0:00:00.348) 0:00:19.634 ********* 2026-03-17 01:21:26.204247 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:26.204254 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:21:26.204262 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:21:26.204270 | orchestrator | 2026-03-17 01:21:26.204277 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-03-17 01:21:26.204285 | orchestrator | Tuesday 17 March 2026 01:21:20 +0000 (0:00:00.299) 0:00:19.934 ********* 2026-03-17 01:21:26.204293 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:21:26.204300 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:21:26.204308 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:21:26.204315 | orchestrator | 2026-03-17 01:21:26.204323 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-17 01:21:26.204332 | orchestrator | Tuesday 17 March 2026 01:21:21 +0000 (0:00:00.465) 0:00:20.399 ********* 2026-03-17 01:21:26.204340 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-17 01:21:26.204347 | orchestrator | 2026-03-17 01:21:26.204355 | orchestrator | TASK [Set validation result to failed if a test failed] 
************************ 2026-03-17 01:21:26.204363 | orchestrator | Tuesday 17 March 2026 01:21:21 +0000 (0:00:00.257) 0:00:20.657 ********* 2026-03-17 01:21:26.204371 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:21:26.204379 | orchestrator | 2026-03-17 01:21:26.204387 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-17 01:21:26.204399 | orchestrator | Tuesday 17 March 2026 01:21:21 +0000 (0:00:00.244) 0:00:20.901 ********* 2026-03-17 01:21:26.204410 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-17 01:21:26.204420 | orchestrator | 2026-03-17 01:21:26.204432 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-17 01:21:26.204443 | orchestrator | Tuesday 17 March 2026 01:21:23 +0000 (0:00:01.536) 0:00:22.438 ********* 2026-03-17 01:21:26.204454 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-17 01:21:26.204464 | orchestrator | 2026-03-17 01:21:26.204475 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-17 01:21:26.204493 | orchestrator | Tuesday 17 March 2026 01:21:23 +0000 (0:00:00.251) 0:00:22.690 ********* 2026-03-17 01:21:26.204503 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-17 01:21:26.204513 | orchestrator | 2026-03-17 01:21:26.204524 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:21:26.204533 | orchestrator | Tuesday 17 March 2026 01:21:23 +0000 (0:00:00.254) 0:00:22.945 ********* 2026-03-17 01:21:26.204543 | orchestrator | 2026-03-17 01:21:26.204555 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:21:26.204566 | orchestrator | Tuesday 17 March 2026 01:21:23 +0000 (0:00:00.068) 0:00:23.013 ********* 2026-03-17 01:21:26.204577 | orchestrator | 2026-03-17 
01:21:26.204587 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-17 01:21:26.204597 | orchestrator | Tuesday 17 March 2026 01:21:23 +0000 (0:00:00.078) 0:00:23.091 ********* 2026-03-17 01:21:26.204609 | orchestrator | 2026-03-17 01:21:26.204620 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-17 01:21:26.204632 | orchestrator | Tuesday 17 March 2026 01:21:23 +0000 (0:00:00.078) 0:00:23.170 ********* 2026-03-17 01:21:26.204671 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-17 01:21:26.204678 | orchestrator | 2026-03-17 01:21:26.204685 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-17 01:21:26.204692 | orchestrator | Tuesday 17 March 2026 01:21:25 +0000 (0:00:01.539) 0:00:24.710 ********* 2026-03-17 01:21:26.204720 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-03-17 01:21:26.204728 | orchestrator |  "msg": [ 2026-03-17 01:21:26.204735 | orchestrator |  "Validator run completed.", 2026-03-17 01:21:26.204741 | orchestrator |  "You can find the report file here:", 2026-03-17 01:21:26.204748 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-17T01:21:01+00:00-report.json", 2026-03-17 01:21:26.204755 | orchestrator |  "on the following host:", 2026-03-17 01:21:26.204761 | orchestrator |  "testbed-manager" 2026-03-17 01:21:26.204767 | orchestrator |  ] 2026-03-17 01:21:26.204773 | orchestrator | } 2026-03-17 01:21:26.204780 | orchestrator | 2026-03-17 01:21:26.204786 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:21:26.204793 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-17 01:21:26.204801 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  
rescued=0 ignored=0 2026-03-17 01:21:26.204812 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-17 01:21:26.204819 | orchestrator | 2026-03-17 01:21:26.204825 | orchestrator | 2026-03-17 01:21:26.204831 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:21:26.204837 | orchestrator | Tuesday 17 March 2026 01:21:25 +0000 (0:00:00.553) 0:00:25.263 ********* 2026-03-17 01:21:26.204843 | orchestrator | =============================================================================== 2026-03-17 01:21:26.204849 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.63s 2026-03-17 01:21:26.204855 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.71s 2026-03-17 01:21:26.204861 | orchestrator | Write report file ------------------------------------------------------- 1.54s 2026-03-17 01:21:26.204867 | orchestrator | Aggregate test results step one ----------------------------------------- 1.54s 2026-03-17 01:21:26.204874 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s 2026-03-17 01:21:26.204880 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.72s 2026-03-17 01:21:26.204886 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.69s 2026-03-17 01:21:26.204897 | orchestrator | Create report output directory ------------------------------------------ 0.69s 2026-03-17 01:21:26.204904 | orchestrator | Aggregate test results step one ----------------------------------------- 0.60s 2026-03-17 01:21:26.204910 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.60s 2026-03-17 01:21:26.204916 | orchestrator | Print report file information ------------------------------------------- 0.55s 2026-03-17 01:21:26.204922 | 
orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.52s 2026-03-17 01:21:26.204928 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.50s 2026-03-17 01:21:26.204934 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2026-03-17 01:21:26.204940 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.48s 2026-03-17 01:21:26.204947 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2026-03-17 01:21:26.204953 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.47s 2026-03-17 01:21:26.204959 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.44s 2026-03-17 01:21:26.204965 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.44s 2026-03-17 01:21:26.204971 | orchestrator | Calculate sub test expression results ----------------------------------- 0.35s 2026-03-17 01:21:26.487188 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-17 01:21:26.495968 | orchestrator | + set -e 2026-03-17 01:21:26.497523 | orchestrator | + source /opt/manager-vars.sh 2026-03-17 01:21:26.497573 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-17 01:21:26.497581 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-17 01:21:26.497588 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-17 01:21:26.497594 | orchestrator | ++ CEPH_VERSION=reef 2026-03-17 01:21:26.497601 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-17 01:21:26.497609 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-17 01:21:26.497628 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-17 01:21:26.497653 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-17 01:21:26.497659 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-17 
01:21:26.497663 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-17 01:21:26.497667 | orchestrator | ++ export ARA=false 2026-03-17 01:21:26.497671 | orchestrator | ++ ARA=false 2026-03-17 01:21:26.497675 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-17 01:21:26.497679 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-17 01:21:26.497683 | orchestrator | ++ export TEMPEST=true 2026-03-17 01:21:26.497687 | orchestrator | ++ TEMPEST=true 2026-03-17 01:21:26.497691 | orchestrator | ++ export IS_ZUUL=true 2026-03-17 01:21:26.497695 | orchestrator | ++ IS_ZUUL=true 2026-03-17 01:21:26.497699 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-17 01:21:26.497703 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64 2026-03-17 01:21:26.497707 | orchestrator | ++ export EXTERNAL_API=false 2026-03-17 01:21:26.497711 | orchestrator | ++ EXTERNAL_API=false 2026-03-17 01:21:26.497714 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-17 01:21:26.497718 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-17 01:21:26.497722 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-17 01:21:26.497726 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-17 01:21:26.497729 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-17 01:21:26.497733 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-17 01:21:26.497737 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-17 01:21:26.497741 | orchestrator | + source /etc/os-release 2026-03-17 01:21:26.497744 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-17 01:21:26.497748 | orchestrator | ++ NAME=Ubuntu 2026-03-17 01:21:26.497752 | orchestrator | ++ VERSION_ID=24.04 2026-03-17 01:21:26.497755 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-17 01:21:26.497759 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-17 01:21:26.497763 | orchestrator | ++ ID=ubuntu 2026-03-17 01:21:26.497767 | orchestrator | ++ ID_LIKE=debian 2026-03-17 01:21:26.497770 | orchestrator | ++ 
HOME_URL=https://www.ubuntu.com/ 2026-03-17 01:21:26.497774 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-17 01:21:26.497778 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-17 01:21:26.497783 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-17 01:21:26.497787 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-17 01:21:26.497810 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-17 01:21:26.497814 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-17 01:21:26.497818 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-17 01:21:26.497823 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-17 01:21:26.511490 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-17 01:21:54.904631 | orchestrator | 2026-03-17 01:21:54.904740 | orchestrator | # Status of Elasticsearch 2026-03-17 01:21:54.904750 | orchestrator | 2026-03-17 01:21:54.904755 | orchestrator | + pushd /opt/configuration/contrib 2026-03-17 01:21:54.904761 | orchestrator | + echo 2026-03-17 01:21:54.904766 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-17 01:21:54.904770 | orchestrator | + echo 2026-03-17 01:21:54.904775 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-17 01:21:55.052988 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-17 01:21:55.053059 | orchestrator | 2026-03-17 01:21:55.053066 | orchestrator | # Status of MariaDB 2026-03-17 01:21:55.053071 | orchestrator | 2026-03-17 01:21:55.053076 | orchestrator | + echo 2026-03-17 01:21:55.053080 | orchestrator | + echo '# Status of MariaDB' 2026-03-17 01:21:55.053085 | orchestrator | + echo 2026-03-17 01:21:55.053355 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-17 01:21:55.081801 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-17 01:21:55.081873 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-17 01:21:55.081880 | orchestrator | + MARIADB_USER=root_shard_0 2026-03-17 01:21:55.081886 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-03-17 01:21:55.133645 | orchestrator | Reading package lists... 2026-03-17 01:21:55.378114 | orchestrator | Building dependency tree... 2026-03-17 01:21:55.378400 | orchestrator | Reading state information... 2026-03-17 01:21:55.722390 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-03-17 01:21:55.722485 | orchestrator | bc set to manually installed. 2026-03-17 01:21:55.722502 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2026-03-17 01:21:56.386461 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-03-17 01:21:56.386551 | orchestrator | 2026-03-17 01:21:56.386563 | orchestrator | # Status of Prometheus 2026-03-17 01:21:56.386572 | orchestrator | 2026-03-17 01:21:56.386579 | orchestrator | + echo 2026-03-17 01:21:56.386591 | orchestrator | + echo '# Status of Prometheus' 2026-03-17 01:21:56.386601 | orchestrator | + echo 2026-03-17 01:21:56.386608 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-17 01:21:56.452122 | orchestrator | Unauthorized 2026-03-17 01:21:56.455536 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-17 01:21:56.512219 | orchestrator | Unauthorized 2026-03-17 01:21:56.515499 | orchestrator | 2026-03-17 01:21:56.515558 | orchestrator | # Status of RabbitMQ 2026-03-17 01:21:56.515565 | orchestrator | 2026-03-17 01:21:56.515571 | orchestrator | + echo 2026-03-17 01:21:56.515576 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-17 01:21:56.515582 | orchestrator | + echo 2026-03-17 01:21:56.516774 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-17 01:21:56.565305 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-17 01:21:56.565401 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-17 01:21:56.565421 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-03-17 01:21:57.002097 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-03-17 01:21:57.011318 | orchestrator | 2026-03-17 01:21:57.011375 | orchestrator | # Status of Redis 2026-03-17 01:21:57.011381 | orchestrator | 2026-03-17 01:21:57.011385 | orchestrator | + echo 2026-03-17 01:21:57.011390 | orchestrator | + echo '# Status of Redis' 2026-03-17 01:21:57.011395 | orchestrator | + echo 2026-03-17 01:21:57.011400 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A 
-E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-03-17 01:21:57.017493 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001742s;;;0.000000;10.000000 2026-03-17 01:21:57.017566 | orchestrator | 2026-03-17 01:21:57.017572 | orchestrator | + popd 2026-03-17 01:21:57.017576 | orchestrator | + echo 2026-03-17 01:21:57.017580 | orchestrator | # Create backup of MariaDB database 2026-03-17 01:21:57.017585 | orchestrator | 2026-03-17 01:21:57.017589 | orchestrator | + echo '# Create backup of MariaDB database' 2026-03-17 01:21:57.017593 | orchestrator | + echo 2026-03-17 01:21:57.017597 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-03-17 01:21:58.881682 | orchestrator | 2026-03-17 01:21:58 | INFO  | Task 63396e6a-526e-4615-b60b-a3d83d83266b (mariadb_backup) was prepared for execution. 2026-03-17 01:21:58.881808 | orchestrator | 2026-03-17 01:21:58 | INFO  | It takes a moment until task 63396e6a-526e-4615-b60b-a3d83d83266b (mariadb_backup) has been started and output is visible here. 
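The validator run earlier wrote its report to `/opt/reports/validator/ceph-osds-validator-<timestamp>-report.json` on testbed-manager. A minimal sketch for locating the newest such report; the directory layout and filename pattern are taken from the log, but the `"result"` key used in the usage example below is a hypothetical field name, not confirmed by this output:

```python
import glob
import json
import os

def latest_validator_report(report_dir="/opt/reports/validator"):
    """Return the parsed JSON of the newest ceph-osds-validator report,
    or None if no report exists. Filenames embed an ISO timestamp, so a
    lexicographic sort also sorts chronologically."""
    pattern = os.path.join(report_dir, "ceph-osds-validator-*-report.json")
    reports = sorted(glob.glob(pattern))
    if not reports:
        return None
    with open(reports[-1]) as fh:
        return json.load(fh)

# Hypothetical usage (field name "result" is an assumption):
# report = latest_validator_report()
# print(report and report.get("result"))
```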
2026-03-17 01:22:25.359740 | orchestrator | 2026-03-17 01:22:25.359926 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:22:25.359946 | orchestrator | 2026-03-17 01:22:25.359958 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:22:25.359970 | orchestrator | Tuesday 17 March 2026 01:22:02 +0000 (0:00:00.158) 0:00:00.158 ********* 2026-03-17 01:22:25.359981 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:22:25.359993 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:22:25.360004 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:22:25.360015 | orchestrator | 2026-03-17 01:22:25.360026 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:22:25.360037 | orchestrator | Tuesday 17 March 2026 01:22:02 +0000 (0:00:00.304) 0:00:00.463 ********* 2026-03-17 01:22:25.360050 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-17 01:22:25.360061 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-17 01:22:25.360072 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-17 01:22:25.360083 | orchestrator | 2026-03-17 01:22:25.360094 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-17 01:22:25.360105 | orchestrator | 2026-03-17 01:22:25.360116 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-17 01:22:25.360127 | orchestrator | Tuesday 17 March 2026 01:22:03 +0000 (0:00:00.484) 0:00:00.948 ********* 2026-03-17 01:22:25.360138 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-17 01:22:25.360149 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-17 01:22:25.360160 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-17 01:22:25.360171 | orchestrator | 
2026-03-17 01:22:25.360182 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-17 01:22:25.360193 | orchestrator | Tuesday 17 March 2026 01:22:03 +0000 (0:00:00.388) 0:00:01.336 ********* 2026-03-17 01:22:25.360204 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:22:25.360215 | orchestrator | 2026-03-17 01:22:25.360234 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-03-17 01:22:25.360257 | orchestrator | Tuesday 17 March 2026 01:22:04 +0000 (0:00:00.459) 0:00:01.795 ********* 2026-03-17 01:22:25.360283 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:22:25.360301 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:22:25.360320 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:22:25.360338 | orchestrator | 2026-03-17 01:22:25.360356 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-03-17 01:22:25.360373 | orchestrator | Tuesday 17 March 2026 01:22:07 +0000 (0:00:02.864) 0:00:04.660 ********* 2026-03-17 01:22:25.360391 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-17 01:22:25.360410 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-17 01:22:25.360430 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-17 01:22:25.360450 | orchestrator | mariadb_bootstrap_restart 2026-03-17 01:22:25.360496 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:22:25.360565 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:22:25.360587 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:22:25.360604 | orchestrator | 2026-03-17 01:22:25.360624 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-17 01:22:25.360642 | orchestrator | 
skipping: no hosts matched 2026-03-17 01:22:25.360660 | orchestrator | 2026-03-17 01:22:25.360678 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-17 01:22:25.360696 | orchestrator | skipping: no hosts matched 2026-03-17 01:22:25.360709 | orchestrator | 2026-03-17 01:22:25.360720 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-17 01:22:25.360731 | orchestrator | skipping: no hosts matched 2026-03-17 01:22:25.360741 | orchestrator | 2026-03-17 01:22:25.360782 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-17 01:22:25.360903 | orchestrator | 2026-03-17 01:22:25.360917 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-17 01:22:25.360928 | orchestrator | Tuesday 17 March 2026 01:22:24 +0000 (0:00:17.456) 0:00:22.117 ********* 2026-03-17 01:22:25.360939 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:22:25.360950 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:22:25.360961 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:22:25.360972 | orchestrator | 2026-03-17 01:22:25.360983 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-17 01:22:25.360994 | orchestrator | Tuesday 17 March 2026 01:22:24 +0000 (0:00:00.270) 0:00:22.387 ********* 2026-03-17 01:22:25.361004 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:22:25.361016 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:22:25.361033 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:22:25.361051 | orchestrator | 2026-03-17 01:22:25.361068 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:22:25.361087 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 
01:22:25.361109 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-17 01:22:25.361129 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-17 01:22:25.361149 | orchestrator | 2026-03-17 01:22:25.361167 | orchestrator | 2026-03-17 01:22:25.361182 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:22:25.361193 | orchestrator | Tuesday 17 March 2026 01:22:25 +0000 (0:00:00.303) 0:00:22.691 ********* 2026-03-17 01:22:25.361203 | orchestrator | =============================================================================== 2026-03-17 01:22:25.361219 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.46s 2026-03-17 01:22:25.361271 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.86s 2026-03-17 01:22:25.361294 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2026-03-17 01:22:25.361313 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.46s 2026-03-17 01:22:25.361331 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2026-03-17 01:22:25.361347 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-03-17 01:22:25.361359 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.30s 2026-03-17 01:22:25.361369 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.27s 2026-03-17 01:22:25.554345 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-03-17 01:22:25.564730 | orchestrator | + set -e 2026-03-17 01:22:25.564954 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-17 01:22:25.564972 | orchestrator | ++ export 
INTERACTIVE=false 2026-03-17 01:22:25.564981 | orchestrator | ++ INTERACTIVE=false 2026-03-17 01:22:25.565090 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-17 01:22:25.565102 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-17 01:22:25.565110 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-17 01:22:25.565131 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-17 01:22:25.567544 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-17 01:22:25.567577 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-17 01:22:25.567586 | orchestrator | + export OS_CLOUD=admin 2026-03-17 01:22:25.567595 | orchestrator | + OS_CLOUD=admin 2026-03-17 01:22:25.567609 | orchestrator | 2026-03-17 01:22:25.567618 | orchestrator | # OpenStack endpoints 2026-03-17 01:22:25.567626 | orchestrator | 2026-03-17 01:22:25.567634 | orchestrator | + echo 2026-03-17 01:22:25.567642 | orchestrator | + echo '# OpenStack endpoints' 2026-03-17 01:22:25.567650 | orchestrator | + echo 2026-03-17 01:22:25.567658 | orchestrator | + openstack endpoint list 2026-03-17 01:22:28.931584 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-17 01:22:28.931686 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-03-17 01:22:28.931694 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-17 01:22:28.931699 | orchestrator | | 04163b2b3ba44813bb6cfe52144ee6a2 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-03-17 01:22:28.931703 | orchestrator | | 06f034c62092497d98728fb68cea7f37 | RegionOne | swift | object-store | True | 
public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-17 01:22:28.931710 | orchestrator | | 0d10819503db471698c5d6cfdfc9eaa6 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-17 01:22:28.931714 | orchestrator | | 107feef0093949b7b4f8f116855a5af1 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-17 01:22:28.931718 | orchestrator | | 1f3dd5702eaa41b995c62b11a3aa1eab | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-03-17 01:22:28.931722 | orchestrator | | 3b8b353e0cc048a783cba8cf2d2c6420 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-03-17 01:22:28.931725 | orchestrator | | 4c0728cc5fc043b294f1095e595411b0 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-03-17 01:22:28.931729 | orchestrator | | 58e13716ca794d999c190e957fe39cfa | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-03-17 01:22:28.931733 | orchestrator | | 61328f27f65f4239b5217511220ac2ca | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-03-17 01:22:28.931737 | orchestrator | | 62504500c2bd4a24b035a491b2757d44 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-03-17 01:22:28.931741 | orchestrator | | 65c35c04f07c49628a5dc5bf599e54cc | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-03-17 01:22:28.931745 | orchestrator | | 6a9a17db28d94876ad5c2ba687989e89 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-03-17 01:22:28.931763 | orchestrator | | 6bb6aabb1b5747baa779877644ef79e7 | RegionOne | designate | dns | True | public | 
https://api.testbed.osism.xyz:9001 | 2026-03-17 01:22:28.931787 | orchestrator | | 76f6b8337e304249b2b94b5296b7676c | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-03-17 01:22:28.931791 | orchestrator | | 8465c4e8360a417dbd16e82b1d8c3472 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-03-17 01:22:28.931795 | orchestrator | | 88c9cbcebe90418fa49e9f91ff766fb6 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-03-17 01:22:28.931799 | orchestrator | | 8a0c651d65fb4f46b734d72198664259 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-17 01:22:28.931803 | orchestrator | | 931ed67734444d62ab96a73ce9c4572d | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-03-17 01:22:28.931807 | orchestrator | | c407cd0f33f9433f8bfa51388cf3eb7c | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-03-17 01:22:28.931811 | orchestrator | | d6e6655e47df4f8aa49d8cfe9b51b0a4 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-03-17 01:22:28.931825 | orchestrator | | f1ff23e2bbd645a0b6e13c065dcd7b50 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-03-17 01:22:28.931829 | orchestrator | | fa2a0fd56b7541d8ab579b7c66d3d65e | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-03-17 01:22:28.931833 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-17 01:22:29.177961 | orchestrator | 2026-03-17 01:22:29.178055 | orchestrator | # Cinder 2026-03-17 01:22:29.178063 | orchestrator | 2026-03-17 01:22:29.178067 | 
orchestrator | + echo 2026-03-17 01:22:29.178071 | orchestrator | + echo '# Cinder' 2026-03-17 01:22:29.178075 | orchestrator | + echo 2026-03-17 01:22:29.178080 | orchestrator | + openstack volume service list 2026-03-17 01:22:32.902565 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-17 01:22:32.902696 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-03-17 01:22:32.902709 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-17 01:22:32.902718 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-17T01:22:24.000000 | 2026-03-17 01:22:32.902742 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-17T01:22:24.000000 | 2026-03-17 01:22:32.902750 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-17T01:22:25.000000 | 2026-03-17 01:22:32.902788 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-17T01:22:24.000000 | 2026-03-17 01:22:32.902800 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-17T01:22:27.000000 | 2026-03-17 01:22:32.902811 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-17T01:22:28.000000 | 2026-03-17 01:22:32.902823 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-17T01:22:24.000000 | 2026-03-17 01:22:32.902835 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-17T01:22:26.000000 | 2026-03-17 01:22:32.902849 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-17T01:22:26.000000 | 2026-03-17 01:22:32.902862 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 
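The `openstack volume service list` table above shows every cinder service with State `up`. A health check over that output can be approximated by filtering the machine-readable form; the snippet below is a minimal sketch against a canned sample (a real check would pipe `openstack volume service list -f value -c State` instead of the here-string):

```shell
# Canned State column as `openstack volume service list -f value -c State`
# would print it (hard-coded here so the sketch runs without a cloud).
states='up
up
up
up
up
down'
# Count services that are not "up"; a healthy deployment yields zero.
down_count=$(printf '%s\n' "$states" | grep -cv '^up$')
echo "services not up: $down_count"
```

For the table in the log, all nine services report `up`, so the count would be zero.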
2026-03-17 01:22:33.160901 | orchestrator | 2026-03-17 01:22:33.161007 | orchestrator | # Neutron 2026-03-17 01:22:33.161026 | orchestrator | 2026-03-17 01:22:33.161040 | orchestrator | + echo 2026-03-17 01:22:33.161053 | orchestrator | + echo '# Neutron' 2026-03-17 01:22:33.161066 | orchestrator | + echo 2026-03-17 01:22:33.161080 | orchestrator | + openstack network agent list 2026-03-17 01:22:36.035112 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-17 01:22:36.035268 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-03-17 01:22:36.035285 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-17 01:22:36.035296 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-03-17 01:22:36.035307 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-03-17 01:22:36.035317 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-03-17 01:22:36.035327 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-03-17 01:22:36.035334 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-03-17 01:22:36.035340 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-03-17 01:22:36.035347 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-17 01:22:36.035353 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent 
| testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-17 01:22:36.035359 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-17 01:22:36.035365 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-17 01:22:36.296522 | orchestrator | + openstack network service provider list 2026-03-17 01:22:38.797105 | orchestrator | +---------------+------+---------+ 2026-03-17 01:22:38.797245 | orchestrator | | Service Type | Name | Default | 2026-03-17 01:22:38.797263 | orchestrator | +---------------+------+---------+ 2026-03-17 01:22:38.797274 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-03-17 01:22:38.797285 | orchestrator | +---------------+------+---------+ 2026-03-17 01:22:39.059799 | orchestrator | 2026-03-17 01:22:39.059908 | orchestrator | # Nova 2026-03-17 01:22:39.059929 | orchestrator | 2026-03-17 01:22:39.059943 | orchestrator | + echo 2026-03-17 01:22:39.059957 | orchestrator | + echo '# Nova' 2026-03-17 01:22:39.059970 | orchestrator | + echo 2026-03-17 01:22:39.060006 | orchestrator | + openstack compute service list 2026-03-17 01:22:42.425920 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-17 01:22:42.425996 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-03-17 01:22:42.426006 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-17 01:22:42.426050 | orchestrator | | b65e6c14-c78a-4409-acc5-65f3df4d9c6a | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-17T01:22:32.000000 | 2026-03-17 01:22:42.426060 | orchestrator | | 8644d3f3-e965-4c51-a44c-448e98027ae7 | 
nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-17T01:22:41.000000 | 2026-03-17 01:22:42.426089 | orchestrator | | ab18c7f9-6ab6-4202-b23b-eaf6f3944f5b | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-17T01:22:32.000000 | 2026-03-17 01:22:42.426105 | orchestrator | | f0606b78-84b7-4fde-b6c9-bbec7561f24f | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-17T01:22:33.000000 | 2026-03-17 01:22:42.426112 | orchestrator | | 805b73a6-6e50-4937-befa-711927971eb4 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-17T01:22:33.000000 | 2026-03-17 01:22:42.426119 | orchestrator | | ffc2a914-7ee7-4388-9b95-fbf9b80e18fa | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-17T01:22:33.000000 | 2026-03-17 01:22:42.426126 | orchestrator | | e5b9a67c-3386-4a3d-b9a3-d2d1e6abb588 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-17T01:22:33.000000 | 2026-03-17 01:22:42.426132 | orchestrator | | bae2a420-fe70-4104-96ab-4641f7e50719 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-17T01:22:34.000000 | 2026-03-17 01:22:42.426138 | orchestrator | | 569ed418-a3f0-4238-83bd-7afc262e6e8d | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-17T01:22:34.000000 | 2026-03-17 01:22:42.426144 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-17 01:22:42.677295 | orchestrator | + openstack hypervisor list 2026-03-17 01:22:45.323851 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-17 01:22:45.323950 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-03-17 01:22:45.323964 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-17 01:22:45.323974 | orchestrator | | 
4023a3a5-3e0f-4d9e-9373-e54fdf54020c | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-03-17 01:22:45.323983 | orchestrator | | 5bfde917-b480-4df4-a88b-510fb175f9ec | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-03-17 01:22:45.323993 | orchestrator | | 37e6404b-d421-40a2-b2f5-ecb9b1c8bd17 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-03-17 01:22:45.324002 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-17 01:22:45.611136 | orchestrator | 2026-03-17 01:22:45.611221 | orchestrator | # Run OpenStack test play 2026-03-17 01:22:45.611234 | orchestrator | 2026-03-17 01:22:45.611243 | orchestrator | + echo 2026-03-17 01:22:45.611253 | orchestrator | + echo '# Run OpenStack test play' 2026-03-17 01:22:45.611263 | orchestrator | + echo 2026-03-17 01:22:45.611272 | orchestrator | + osism apply --environment openstack test 2026-03-17 01:22:47.608931 | orchestrator | 2026-03-17 01:22:47 | INFO  | Trying to run play test in environment openstack 2026-03-17 01:22:47.673718 | orchestrator | 2026-03-17 01:22:47 | INFO  | Task 79d9bc27-a8bf-4881-b677-6b26cbea0df7 (test) was prepared for execution. 2026-03-17 01:22:47.673841 | orchestrator | 2026-03-17 01:22:47 | INFO  | It takes a moment until task 79d9bc27-a8bf-4881-b677-6b26cbea0df7 (test) has been started and output is visible here. 
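Earlier in the trace, `manager-version.sh` derives `MANAGER_VERSION` with a single `awk` call against the manager configuration. Reproduced standalone below; the temp file content is a stand-in for the real `/opt/configuration/environments/manager/configuration.yml`:

```shell
# Stand-in for /opt/configuration/environments/manager/configuration.yml
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
manager_version: 9.5.0
EOF
# Same pattern the trace shows: field separator ": ", match the key at
# line start, print the value field.
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' "$cfg")
echo "MANAGER_VERSION=$MANAGER_VERSION"
rm -f "$cfg"
```

Anchoring the pattern with `^` keeps the match from picking up indented keys elsewhere in the YAML.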
2026-03-17 01:25:33.079579 | orchestrator | 2026-03-17 01:25:33.079664 | orchestrator | PLAY [Create test project] ***************************************************** 2026-03-17 01:25:33.079675 | orchestrator | 2026-03-17 01:25:33.079683 | orchestrator | TASK [Create test domain] ****************************************************** 2026-03-17 01:25:33.079690 | orchestrator | Tuesday 17 March 2026 01:22:51 +0000 (0:00:00.071) 0:00:00.071 ********* 2026-03-17 01:25:33.079697 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.079704 | orchestrator | 2026-03-17 01:25:33.079711 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-03-17 01:25:33.079717 | orchestrator | Tuesday 17 March 2026 01:22:55 +0000 (0:00:03.636) 0:00:03.707 ********* 2026-03-17 01:25:33.079724 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.079731 | orchestrator | 2026-03-17 01:25:33.079738 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-03-17 01:25:33.079745 | orchestrator | Tuesday 17 March 2026 01:22:59 +0000 (0:00:04.105) 0:00:07.812 ********* 2026-03-17 01:25:33.079773 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.079781 | orchestrator | 2026-03-17 01:25:33.079787 | orchestrator | TASK [Create test project] ***************************************************** 2026-03-17 01:25:33.079793 | orchestrator | Tuesday 17 March 2026 01:23:05 +0000 (0:00:06.356) 0:00:14.168 ********* 2026-03-17 01:25:33.079800 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.079806 | orchestrator | 2026-03-17 01:25:33.079813 | orchestrator | TASK [Create test user] ******************************************************** 2026-03-17 01:25:33.079819 | orchestrator | Tuesday 17 March 2026 01:23:09 +0000 (0:00:04.022) 0:00:18.190 ********* 2026-03-17 01:25:33.079824 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.079830 | orchestrator | 2026-03-17 01:25:33.079836 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-03-17 01:25:33.079842 | orchestrator | Tuesday 17 March 2026 01:23:14 +0000 (0:00:04.060) 0:00:22.250 ********* 2026-03-17 01:25:33.079850 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-03-17 01:25:33.079857 | orchestrator | changed: [localhost] => (item=member) 2026-03-17 01:25:33.079865 | orchestrator | changed: [localhost] => (item=creator) 2026-03-17 01:25:33.079871 | orchestrator | 2026-03-17 01:25:33.079877 | orchestrator | TASK [Create test server group] ************************************************ 2026-03-17 01:25:33.079884 | orchestrator | Tuesday 17 March 2026 01:23:25 +0000 (0:00:11.515) 0:00:33.766 ********* 2026-03-17 01:25:33.079890 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.079898 | orchestrator | 2026-03-17 01:25:33.079904 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-03-17 01:25:33.079910 | orchestrator | Tuesday 17 March 2026 01:23:29 +0000 (0:00:04.192) 0:00:37.959 ********* 2026-03-17 01:25:33.079916 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.079923 | orchestrator | 2026-03-17 01:25:33.079930 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-03-17 01:25:33.079950 | orchestrator | Tuesday 17 March 2026 01:23:34 +0000 (0:00:04.767) 0:00:42.726 ********* 2026-03-17 01:25:33.079957 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.079962 | orchestrator | 2026-03-17 01:25:33.079969 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-03-17 01:25:33.079975 | orchestrator | Tuesday 17 March 2026 01:23:38 +0000 (0:00:04.325) 0:00:47.052 ********* 2026-03-17 01:25:33.079982 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.079989 | orchestrator | 2026-03-17 01:25:33.079995 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-03-17 01:25:33.080057 | orchestrator | Tuesday 17 March 2026 01:23:42 +0000 (0:00:04.038) 0:00:51.091 ********* 2026-03-17 01:25:33.080064 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.080069 | orchestrator | 2026-03-17 01:25:33.080075 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-03-17 01:25:33.080081 | orchestrator | Tuesday 17 March 2026 01:23:46 +0000 (0:00:04.124) 0:00:55.215 ********* 2026-03-17 01:25:33.080088 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.080094 | orchestrator | 2026-03-17 01:25:33.080101 | orchestrator | TASK [Create test network] ***************************************************** 2026-03-17 01:25:33.080107 | orchestrator | Tuesday 17 March 2026 01:23:50 +0000 (0:00:03.786) 0:00:59.001 ********* 2026-03-17 01:25:33.080114 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.080120 | orchestrator | 2026-03-17 01:25:33.080126 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-03-17 01:25:33.080132 | orchestrator | Tuesday 17 March 2026 01:23:55 +0000 (0:00:04.898) 0:01:03.900 ********* 2026-03-17 01:25:33.080139 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.080146 | orchestrator | 2026-03-17 01:25:33.080152 | orchestrator | TASK [Create test router] ****************************************************** 2026-03-17 01:25:33.080159 | orchestrator | Tuesday 17 March 2026 01:24:00 +0000 (0:00:05.183) 0:01:09.084 ********* 2026-03-17 01:25:33.080166 | orchestrator | changed: [localhost] 2026-03-17 01:25:33.080171 | orchestrator | 2026-03-17 01:25:33.080175 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-03-17 01:25:33.080187 | orchestrator | 2026-03-17 01:25:33.080192 | orchestrator | TASK [Get test server group] *************************************************** 2026-03-17 01:25:33.080197 
| orchestrator | Tuesday 17 March 2026 01:24:12 +0000 (0:00:11.814) 0:01:20.898 ********* 2026-03-17 01:25:33.080202 | orchestrator | ok: [localhost] 2026-03-17 01:25:33.080209 | orchestrator | 2026-03-17 01:25:33.080214 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-03-17 01:25:33.080219 | orchestrator | Tuesday 17 March 2026 01:24:16 +0000 (0:00:03.648) 0:01:24.547 ********* 2026-03-17 01:25:33.080224 | orchestrator | skipping: [localhost] 2026-03-17 01:25:33.080229 | orchestrator | 2026-03-17 01:25:33.080234 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-03-17 01:25:33.080238 | orchestrator | Tuesday 17 March 2026 01:24:16 +0000 (0:00:00.052) 0:01:24.599 ********* 2026-03-17 01:25:33.080243 | orchestrator | skipping: [localhost] 2026-03-17 01:25:33.080247 | orchestrator | 2026-03-17 01:25:33.080252 | orchestrator | TASK [Delete test instances] *************************************************** 2026-03-17 01:25:33.080257 | orchestrator | Tuesday 17 March 2026 01:24:16 +0000 (0:00:00.052) 0:01:24.651 ********* 2026-03-17 01:25:33.080262 | orchestrator | skipping: [localhost] => (item=test-4)  2026-03-17 01:25:33.080267 | orchestrator | skipping: [localhost] => (item=test-3)  2026-03-17 01:25:33.080285 | orchestrator | skipping: [localhost] => (item=test-2)  2026-03-17 01:25:33.080290 | orchestrator | skipping: [localhost] => (item=test-1)  2026-03-17 01:25:33.080295 | orchestrator | skipping: [localhost] => (item=test)  2026-03-17 01:25:33.080300 | orchestrator | skipping: [localhost] 2026-03-17 01:25:33.080305 | orchestrator | 2026-03-17 01:25:33.080309 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-03-17 01:25:33.080314 | orchestrator | Tuesday 17 March 2026 01:24:16 +0000 (0:00:00.174) 0:01:24.826 ********* 2026-03-17 01:25:33.080319 | orchestrator | skipping: [localhost] 2026-03-17 
01:25:33.080323 | orchestrator | 2026-03-17 01:25:33.080328 | orchestrator | TASK [Create test instances] *************************************************** 2026-03-17 01:25:33.080333 | orchestrator | Tuesday 17 March 2026 01:24:16 +0000 (0:00:00.169) 0:01:24.995 ********* 2026-03-17 01:25:33.080338 | orchestrator | changed: [localhost] => (item=test) 2026-03-17 01:25:33.080342 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-17 01:25:33.080347 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-17 01:25:33.080352 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-17 01:25:33.080356 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-17 01:25:33.080361 | orchestrator | 2026-03-17 01:25:33.080365 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-03-17 01:25:33.080370 | orchestrator | Tuesday 17 March 2026 01:24:21 +0000 (0:00:04.935) 0:01:29.931 ********* 2026-03-17 01:25:33.080375 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-03-17 01:25:33.080381 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-03-17 01:25:33.080386 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-03-17 01:25:33.080391 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-03-17 01:25:33.080395 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left). 
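The `FAILED - RETRYING` messages above are Ansible polling the async instance-creation jobs until they finish (60 retries in this play). The equivalent shell polling loop, with a stubbed status function standing in for something like `openstack server show test -f value -c status` (the stub flips to ACTIVE after a few polls so the sketch runs without a cloud):

```shell
# Stub for a status query; sets $status rather than echoing, so the
# poll counter survives (command substitution would run in a subshell).
poll=0
server_status() {
  poll=$((poll + 1))
  if [ "$poll" -ge 3 ]; then status=ACTIVE; else status=BUILD; fi
}
retries=60
server_status
until [ "$status" = "ACTIVE" ]; do
  retries=$((retries - 1))
  if [ "$retries" -le 0 ]; then echo "timed out"; break; fi
  server_status
done
echo "status=$status after $poll polls"
```

In the log the wait took 57.5s before all five instances reported finished, well inside the 60-retry budget.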
2026-03-17 01:25:33.080402 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j704284755224.2563', 'results_file': '/ansible/.ansible_async/j704284755224.2563', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-17 01:25:33.080410 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j984009454046.2588', 'results_file': '/ansible/.ansible_async/j984009454046.2588', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-17 01:25:33.080419 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j406338174493.2613', 'results_file': '/ansible/.ansible_async/j406338174493.2613', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-17 01:25:33.080424 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j890108203779.2638', 'results_file': '/ansible/.ansible_async/j890108203779.2638', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-17 01:25:33.080429 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j22698933892.2663', 'results_file': '/ansible/.ansible_async/j22698933892.2663', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-17 01:25:33.080433 | orchestrator | 2026-03-17 01:25:33.080438 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-03-17 01:25:33.080443 | orchestrator | Tuesday 17 March 2026 01:25:19 +0000 (0:00:57.539) 0:02:27.470 ********* 2026-03-17 01:25:33.080499 | orchestrator | changed: [localhost] => (item=test) 2026-03-17 01:25:33.080504 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-17 01:25:33.080509 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-17 01:25:33.080513 | orchestrator | changed: 
[localhost] => (item=test-3) 2026-03-17 01:25:33.080518 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-17 01:25:33.080522 | orchestrator | 2026-03-17 01:25:33.080527 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-03-17 01:25:33.080532 | orchestrator | Tuesday 17 March 2026 01:25:23 +0000 (0:00:04.412) 0:02:31.882 ********* 2026-03-17 01:25:33.080537 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-03-17 01:25:33.080549 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j104088470331.2774', 'results_file': '/ansible/.ansible_async/j104088470331.2774', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-17 01:25:33.080553 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j866210389789.2799', 'results_file': '/ansible/.ansible_async/j866210389789.2799', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-17 01:25:33.080557 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j121660934295.2824', 'results_file': '/ansible/.ansible_async/j121660934295.2824', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-17 01:25:33.080566 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j862058518934.2849', 'results_file': '/ansible/.ansible_async/j862058518934.2849', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-17 01:26:12.663631 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j515706537404.2874', 'results_file': '/ansible/.ansible_async/j515706537404.2874', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-17 01:26:12.663714 | orchestrator | 2026-03-17 
01:26:12.663725 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-03-17 01:26:12.663733 | orchestrator | Tuesday 17 March 2026 01:25:33 +0000 (0:00:09.430) 0:02:41.312 ********* 2026-03-17 01:26:12.663739 | orchestrator | changed: [localhost] => (item=test) 2026-03-17 01:26:12.663747 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-17 01:26:12.663753 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-17 01:26:12.663759 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-17 01:26:12.663765 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-17 01:26:12.663771 | orchestrator | 2026-03-17 01:26:12.663777 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-03-17 01:26:12.663783 | orchestrator | Tuesday 17 March 2026 01:25:37 +0000 (0:00:04.229) 0:02:45.541 ********* 2026-03-17 01:26:12.663806 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
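Once the play finishes, the script's `server_list` helper (seen further down) confirms the test instances. A quick all-ACTIVE count over something like `openstack --os-cloud test server list -f value -c Status` could be sketched as follows, with the Status column canned so the example is self-contained:

```shell
# Canned Status column for the five test instances, as
# `openstack --os-cloud test server list -f value -c Status` would print it.
statuses='ACTIVE
ACTIVE
ACTIVE
ACTIVE
ACTIVE'
active=$(printf '%s\n' "$statuses" | grep -c '^ACTIVE$')
total=$(printf '%s\n' "$statuses" | wc -l | tr -d ' ')
echo "$active/$total instances ACTIVE"
```

This matches the later `server list` output, where all five servers (`test`, `test-1` through `test-4`) show Status ACTIVE.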
2026-03-17 01:26:12.663814 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j411068820310.2950', 'results_file': '/ansible/.ansible_async/j411068820310.2950', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-17 01:26:12.663820 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j207090330687.2975', 'results_file': '/ansible/.ansible_async/j207090330687.2975', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-17 01:26:12.663826 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j400117825529.3001', 'results_file': '/ansible/.ansible_async/j400117825529.3001', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-17 01:26:12.663843 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j193150937693.3027', 'results_file': '/ansible/.ansible_async/j193150937693.3027', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-17 01:26:12.663850 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j833926879161.3053', 'results_file': '/ansible/.ansible_async/j833926879161.3053', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-17 01:26:12.663855 | orchestrator | 2026-03-17 01:26:12.663861 | orchestrator | TASK [Create test volume] ****************************************************** 2026-03-17 01:26:12.663867 | orchestrator | Tuesday 17 March 2026 01:25:47 +0000 (0:00:10.217) 0:02:55.759 ********* 2026-03-17 01:26:12.663873 | orchestrator | changed: [localhost] 2026-03-17 01:26:12.663879 | orchestrator | 2026-03-17 01:26:12.663885 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-03-17 01:26:12.663891 | orchestrator | Tuesday 17 March 2026 
01:25:53 +0000 (0:00:06.300) 0:03:02.059 ********* 2026-03-17 01:26:12.663896 | orchestrator | changed: [localhost] 2026-03-17 01:26:12.663902 | orchestrator | 2026-03-17 01:26:12.663908 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-03-17 01:26:12.663914 | orchestrator | Tuesday 17 March 2026 01:26:07 +0000 (0:00:13.345) 0:03:15.405 ********* 2026-03-17 01:26:12.663920 | orchestrator | ok: [localhost] 2026-03-17 01:26:12.663926 | orchestrator | 2026-03-17 01:26:12.663932 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-03-17 01:26:12.663941 | orchestrator | Tuesday 17 March 2026 01:26:12 +0000 (0:00:05.188) 0:03:20.593 ********* 2026-03-17 01:26:12.663950 | orchestrator | ok: [localhost] => { 2026-03-17 01:26:12.663959 | orchestrator |  "msg": "192.168.112.171" 2026-03-17 01:26:12.663968 | orchestrator | } 2026-03-17 01:26:12.663978 | orchestrator | 2026-03-17 01:26:12.663987 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:26:12.663997 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-17 01:26:12.664007 | orchestrator | 2026-03-17 01:26:12.664016 | orchestrator | 2026-03-17 01:26:12.664025 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:26:12.664089 | orchestrator | Tuesday 17 March 2026 01:26:12 +0000 (0:00:00.043) 0:03:20.637 ********* 2026-03-17 01:26:12.664101 | orchestrator | =============================================================================== 2026-03-17 01:26:12.664110 | orchestrator | Wait for instance creation to complete --------------------------------- 57.54s 2026-03-17 01:26:12.664119 | orchestrator | Attach test volume ----------------------------------------------------- 13.35s 2026-03-17 01:26:12.664127 | orchestrator | Create test router 
----------------------------------------------------- 11.81s 2026-03-17 01:26:12.664133 | orchestrator | Add member roles to user test ------------------------------------------ 11.52s 2026-03-17 01:26:12.664146 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.22s 2026-03-17 01:26:12.664152 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.43s 2026-03-17 01:26:12.664158 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.36s 2026-03-17 01:26:12.664175 | orchestrator | Create test volume ------------------------------------------------------ 6.30s 2026-03-17 01:26:12.664181 | orchestrator | Create floating ip address ---------------------------------------------- 5.19s 2026-03-17 01:26:12.664187 | orchestrator | Create test subnet ------------------------------------------------------ 5.18s 2026-03-17 01:26:12.664192 | orchestrator | Create test instances --------------------------------------------------- 4.94s 2026-03-17 01:26:12.664198 | orchestrator | Create test network ----------------------------------------------------- 4.90s 2026-03-17 01:26:12.664204 | orchestrator | Create ssh security group ----------------------------------------------- 4.77s 2026-03-17 01:26:12.664211 | orchestrator | Add metadata to instances ----------------------------------------------- 4.41s 2026-03-17 01:26:12.664218 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.33s 2026-03-17 01:26:12.664224 | orchestrator | Add tag to instances ---------------------------------------------------- 4.23s 2026-03-17 01:26:12.664231 | orchestrator | Create test server group ------------------------------------------------ 4.19s 2026-03-17 01:26:12.664238 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.12s 2026-03-17 01:26:12.664244 | orchestrator | Create test-admin user 
-------------------------------------------------- 4.11s 2026-03-17 01:26:12.664251 | orchestrator | Create test user -------------------------------------------------------- 4.06s 2026-03-17 01:26:12.997684 | orchestrator | + server_list 2026-03-17 01:26:12.997764 | orchestrator | + openstack --os-cloud test server list 2026-03-17 01:26:16.562142 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-17 01:26:16.562245 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-03-17 01:26:16.562268 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-17 01:26:16.562285 | orchestrator | | ec0b7f28-5c09-4900-af01-90bbcf860a55 | test-3 | ACTIVE | test=192.168.112.108, 192.168.200.3 | N/A (booted from volume) | SCS-1L-1 | 2026-03-17 01:26:16.562336 | orchestrator | | 2bf25672-ccdb-4162-89bd-0cf4ed0d2434 | test-4 | ACTIVE | test=192.168.112.102, 192.168.200.148 | N/A (booted from volume) | SCS-1L-1 | 2026-03-17 01:26:16.562362 | orchestrator | | b72a09fd-c60a-4ab7-af58-a511bf8015d3 | test-2 | ACTIVE | test=192.168.112.162, 192.168.200.171 | N/A (booted from volume) | SCS-1L-1 | 2026-03-17 01:26:16.562380 | orchestrator | | 8c6c80a4-3f43-4bcb-8f60-47a24f09584f | test | ACTIVE | test=192.168.112.171, 192.168.200.155 | N/A (booted from volume) | SCS-1L-1 | 2026-03-17 01:26:16.562397 | orchestrator | | a4162a1d-3301-49b8-a77b-d15d8dda7107 | test-1 | ACTIVE | test=192.168.112.104, 192.168.200.25 | N/A (booted from volume) | SCS-1L-1 | 2026-03-17 01:26:16.562415 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-17 01:26:16.849530 | orchestrator | + openstack --os-cloud test server show test 2026-03-17 01:26:20.102002 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:20.102240 | orchestrator | | Field | Value | 2026-03-17 01:26:20.102291 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:20.102310 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-17 01:26:20.102329 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-17 01:26:20.102346 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-17 01:26:20.102363 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-03-17 01:26:20.102380 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-17 01:26:20.102404 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-17 01:26:20.102443 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-17 01:26:20.102461 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-17 01:26:20.102485 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-17 01:26:20.102495 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-17 01:26:20.102506 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-17 01:26:20.102519 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-17 01:26:20.102531 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-03-17 01:26:20.102542 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-17 01:26:20.102553 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-17 01:26:20.102570 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-17T01:24:55.000000 | 2026-03-17 01:26:20.102589 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-17 01:26:20.102612 | orchestrator | | accessIPv4 | | 2026-03-17 01:26:20.102624 | orchestrator | | accessIPv6 | | 2026-03-17 01:26:20.102636 | orchestrator | | addresses | test=192.168.112.171, 192.168.200.155 | 2026-03-17 01:26:20.102648 | orchestrator | | config_drive | | 2026-03-17 01:26:20.102659 | orchestrator | | created | 2026-03-17T01:24:26Z | 2026-03-17 01:26:20.102670 | orchestrator | | description | None | 2026-03-17 01:26:20.102682 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-17 01:26:20.102694 | orchestrator | | hostId | 9367a6e0cfdf01afc26ed7e111b7f2f43bce98d6400bc2bbb97844df | 2026-03-17 01:26:20.102705 | orchestrator | | host_status | None | 2026-03-17 01:26:20.102724 | orchestrator | | id | 8c6c80a4-3f43-4bcb-8f60-47a24f09584f | 2026-03-17 01:26:20.102742 | orchestrator | | image | N/A (booted from volume) | 2026-03-17 01:26:20.102754 | orchestrator | | key_name | test | 2026-03-17 01:26:20.102772 | orchestrator | | locked | False | 2026-03-17 01:26:20.102805 | orchestrator | | locked_reason | None | 2026-03-17 01:26:20.102826 | orchestrator | | name | test | 2026-03-17 01:26:20.102843 | orchestrator | | pinned_availability_zone | None | 2026-03-17 01:26:20.102860 | orchestrator | | progress | 0 | 2026-03-17 01:26:20.102877 | orchestrator | | 
project_id | 8a5a5e0aae4d413a9e547f380ef9b28c | 2026-03-17 01:26:20.102900 | orchestrator | | properties | hostname='test' | 2026-03-17 01:26:20.102939 | orchestrator | | security_groups | name='ssh' | 2026-03-17 01:26:20.102956 | orchestrator | | | name='icmp' | 2026-03-17 01:26:20.102966 | orchestrator | | server_groups | None | 2026-03-17 01:26:20.102976 | orchestrator | | status | ACTIVE | 2026-03-17 01:26:20.102986 | orchestrator | | tags | test | 2026-03-17 01:26:20.102996 | orchestrator | | trusted_image_certificates | None | 2026-03-17 01:26:20.103006 | orchestrator | | updated | 2026-03-17T01:25:25Z | 2026-03-17 01:26:20.103016 | orchestrator | | user_id | 0e5d6ebf68f14e21a40c6e3866cc8726 | 2026-03-17 01:26:20.103030 | orchestrator | | volumes_attached | delete_on_termination='True', id='d1371d98-dd47-4280-bb92-3b9d5503fed6' | 2026-03-17 01:26:20.103086 | orchestrator | | | delete_on_termination='False', id='8b1e7405-dc4f-4782-afbc-bb36a82de5ba' | 2026-03-17 01:26:20.106810 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:20.390676 | orchestrator | + openstack --os-cloud test server show test-1 2026-03-17 01:26:23.654010 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 
01:26:23.654216 | orchestrator | | Field | Value | 2026-03-17 01:26:23.654231 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:23.654244 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-17 01:26:23.654255 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-17 01:26:23.654266 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-17 01:26:23.654278 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-03-17 01:26:23.654362 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-17 01:26:23.654377 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-17 01:26:23.654407 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-17 01:26:23.654420 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-17 01:26:23.654431 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-17 01:26:23.654443 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-17 01:26:23.654454 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-17 01:26:23.654465 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-17 01:26:23.654477 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-17 01:26:23.654488 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-17 01:26:23.654513 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-17 01:26:23.654527 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-17T01:24:55.000000 | 2026-03-17 01:26:23.654548 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-17 01:26:23.654562 | orchestrator | | accessIPv4 | | 2026-03-17 
01:26:23.654575 | orchestrator | | accessIPv6 | | 2026-03-17 01:26:23.654588 | orchestrator | | addresses | test=192.168.112.104, 192.168.200.25 | 2026-03-17 01:26:23.654602 | orchestrator | | config_drive | | 2026-03-17 01:26:23.654615 | orchestrator | | created | 2026-03-17T01:24:26Z | 2026-03-17 01:26:23.654629 | orchestrator | | description | None | 2026-03-17 01:26:23.654673 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-17 01:26:23.654691 | orchestrator | | hostId | 9367a6e0cfdf01afc26ed7e111b7f2f43bce98d6400bc2bbb97844df | 2026-03-17 01:26:23.654705 | orchestrator | | host_status | None | 2026-03-17 01:26:23.654724 | orchestrator | | id | a4162a1d-3301-49b8-a77b-d15d8dda7107 | 2026-03-17 01:26:23.654736 | orchestrator | | image | N/A (booted from volume) | 2026-03-17 01:26:23.654748 | orchestrator | | key_name | test | 2026-03-17 01:26:23.654759 | orchestrator | | locked | False | 2026-03-17 01:26:23.654770 | orchestrator | | locked_reason | None | 2026-03-17 01:26:23.654781 | orchestrator | | name | test-1 | 2026-03-17 01:26:23.654800 | orchestrator | | pinned_availability_zone | None | 2026-03-17 01:26:23.654811 | orchestrator | | progress | 0 | 2026-03-17 01:26:23.654823 | orchestrator | | project_id | 8a5a5e0aae4d413a9e547f380ef9b28c | 2026-03-17 01:26:23.654834 | orchestrator | | properties | hostname='test-1' | 2026-03-17 01:26:23.654853 | orchestrator | | security_groups | name='ssh' | 2026-03-17 01:26:23.654872 | orchestrator | | | name='icmp' | 2026-03-17 01:26:23.654883 | orchestrator | | server_groups | None | 2026-03-17 01:26:23.654895 | orchestrator | | status | ACTIVE | 2026-03-17 
01:26:23.654906 | orchestrator | | tags | test | 2026-03-17 01:26:23.654924 | orchestrator | | trusted_image_certificates | None | 2026-03-17 01:26:23.654936 | orchestrator | | updated | 2026-03-17T01:25:25Z | 2026-03-17 01:26:23.654947 | orchestrator | | user_id | 0e5d6ebf68f14e21a40c6e3866cc8726 | 2026-03-17 01:26:23.654964 | orchestrator | | volumes_attached | delete_on_termination='True', id='07fc120f-72dc-44f4-ac09-bb2d922bf018' | 2026-03-17 01:26:23.664570 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:23.913822 | orchestrator | + openstack --os-cloud test server show test-2 2026-03-17 01:26:27.000399 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:27.000490 | orchestrator | | Field | Value | 2026-03-17 01:26:27.000503 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:27.000512 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-17 01:26:27.000542 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-17 01:26:27.000551 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-17 01:26:27.000559 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-03-17 01:26:27.000567 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-17 01:26:27.000587 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-17 01:26:27.000611 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-17 01:26:27.000618 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-17 01:26:27.000626 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-17 01:26:27.000633 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-17 01:26:27.000640 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-17 01:26:27.000654 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-17 01:26:27.000661 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-17 01:26:27.000669 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-17 01:26:27.000681 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-17 01:26:27.000688 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-17T01:24:55.000000 | 2026-03-17 01:26:27.000701 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-17 01:26:27.000709 | orchestrator | | accessIPv4 | | 2026-03-17 01:26:27.000717 | orchestrator | | accessIPv6 | | 2026-03-17 01:26:27.000726 | orchestrator | | addresses | test=192.168.112.162, 192.168.200.171 | 2026-03-17 01:26:27.000739 | orchestrator | | config_drive | | 2026-03-17 01:26:27.000747 | orchestrator | | created | 2026-03-17T01:24:29Z | 2026-03-17 01:26:27.000754 | orchestrator | | description | None | 2026-03-17 01:26:27.000762 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-17 01:26:27.000774 | orchestrator | | hostId | 7adcd503ff888770c59dd96ce22749b718a679671dfb0a4dbd2d8239 | 2026-03-17 01:26:27.000782 | orchestrator | | host_status | None | 2026-03-17 01:26:27.000796 | orchestrator | | id | b72a09fd-c60a-4ab7-af58-a511bf8015d3 | 2026-03-17 01:26:27.000804 | orchestrator | | image | N/A (booted from volume) | 2026-03-17 01:26:27.000812 | orchestrator | | key_name | test | 2026-03-17 01:26:27.000825 | orchestrator | | locked | False | 2026-03-17 01:26:27.000833 | orchestrator | | locked_reason | None | 2026-03-17 01:26:27.000840 | orchestrator | | name | test-2 | 2026-03-17 01:26:27.000848 | orchestrator | | pinned_availability_zone | None | 2026-03-17 01:26:27.000856 | orchestrator | | progress | 0 | 2026-03-17 01:26:27.000867 | orchestrator | | project_id | 8a5a5e0aae4d413a9e547f380ef9b28c | 2026-03-17 01:26:27.000874 | orchestrator | | properties | hostname='test-2' | 2026-03-17 01:26:27.000888 | orchestrator | | security_groups | name='ssh' | 2026-03-17 01:26:27.000896 | orchestrator | | | name='icmp' | 2026-03-17 01:26:27.000909 | orchestrator | | server_groups | None | 2026-03-17 01:26:27.000917 | orchestrator | | status | ACTIVE | 2026-03-17 01:26:27.000924 | orchestrator | | tags | test | 2026-03-17 01:26:27.000932 | orchestrator | | trusted_image_certificates | None | 2026-03-17 01:26:27.000940 | orchestrator | | updated | 2026-03-17T01:25:26Z | 2026-03-17 01:26:27.000948 | orchestrator | | user_id | 0e5d6ebf68f14e21a40c6e3866cc8726 | 2026-03-17 01:26:27.000955 | orchestrator | | volumes_attached | delete_on_termination='True', id='245e4dc7-7ca5-4d24-952f-4984aa461207' | 2026-03-17 01:26:27.004828 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:27.285668 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-17 01:26:30.162356 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:30.162499 | orchestrator | | Field | Value | 2026-03-17 01:26:30.162521 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:30.162533 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-17 01:26:30.162547 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-17 01:26:30.162566 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-17 01:26:30.162595 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-03-17 01:26:30.162611 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-17 01:26:30.162650 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-17 
01:26:30.162688 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-17 01:26:30.162705 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-17 01:26:30.162734 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-17 01:26:30.162750 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-17 01:26:30.162766 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-17 01:26:30.162782 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-17 01:26:30.162799 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-17 01:26:30.162815 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-17 01:26:30.162831 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-17 01:26:30.162853 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-17T01:24:57.000000 | 2026-03-17 01:26:30.162883 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-17 01:26:30.162911 | orchestrator | | accessIPv4 | | 2026-03-17 01:26:30.162930 | orchestrator | | accessIPv6 | | 2026-03-17 01:26:30.162947 | orchestrator | | addresses | test=192.168.112.108, 192.168.200.3 | 2026-03-17 01:26:30.162964 | orchestrator | | config_drive | | 2026-03-17 01:26:30.162982 | orchestrator | | created | 2026-03-17T01:24:30Z | 2026-03-17 01:26:30.162998 | orchestrator | | description | None | 2026-03-17 01:26:30.163015 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-17 01:26:30.163025 | orchestrator | | hostId | 41ab2e0556d80e9dab8364c285ea45d135997da834ebd5eead550163 | 2026-03-17 01:26:30.163040 | orchestrator | | host_status | None | 2026-03-17 01:26:30.163108 | orchestrator | | id | 
ec0b7f28-5c09-4900-af01-90bbcf860a55 | 2026-03-17 01:26:30.163127 | orchestrator | | image | N/A (booted from volume) | 2026-03-17 01:26:30.163145 | orchestrator | | key_name | test | 2026-03-17 01:26:30.163162 | orchestrator | | locked | False | 2026-03-17 01:26:30.163180 | orchestrator | | locked_reason | None | 2026-03-17 01:26:30.163197 | orchestrator | | name | test-3 | 2026-03-17 01:26:30.163214 | orchestrator | | pinned_availability_zone | None | 2026-03-17 01:26:30.163231 | orchestrator | | progress | 0 | 2026-03-17 01:26:30.163242 | orchestrator | | project_id | 8a5a5e0aae4d413a9e547f380ef9b28c | 2026-03-17 01:26:30.163265 | orchestrator | | properties | hostname='test-3' | 2026-03-17 01:26:30.163284 | orchestrator | | security_groups | name='ssh' | 2026-03-17 01:26:30.163294 | orchestrator | | | name='icmp' | 2026-03-17 01:26:30.163304 | orchestrator | | server_groups | None | 2026-03-17 01:26:30.163314 | orchestrator | | status | ACTIVE | 2026-03-17 01:26:30.163324 | orchestrator | | tags | test | 2026-03-17 01:26:30.163334 | orchestrator | | trusted_image_certificates | None | 2026-03-17 01:26:30.163343 | orchestrator | | updated | 2026-03-17T01:25:27Z | 2026-03-17 01:26:30.163353 | orchestrator | | user_id | 0e5d6ebf68f14e21a40c6e3866cc8726 | 2026-03-17 01:26:30.163367 | orchestrator | | volumes_attached | delete_on_termination='True', id='08efbb98-fa1b-4599-abbe-aae165bbcdae' | 2026-03-17 01:26:30.166991 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:30.476349 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-17 01:26:33.392082 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:33.392150 | orchestrator | | Field | Value | 2026-03-17 01:26:33.392158 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:33.392163 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-17 01:26:33.392168 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-17 01:26:33.392173 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-17 01:26:33.392177 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-17 01:26:33.392182 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-17 01:26:33.392211 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-17 01:26:33.392226 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-17 01:26:33.392231 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-17 01:26:33.392235 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-17 01:26:33.392239 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-17 01:26:33.392243 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-17 01:26:33.392247 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-17 01:26:33.392251 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-03-17 01:26:33.392255 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-17 01:26:33.392262 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-17 01:26:33.392269 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-17T01:24:57.000000 | 2026-03-17 01:26:33.392277 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-17 01:26:33.392282 | orchestrator | | accessIPv4 | | 2026-03-17 01:26:33.392286 | orchestrator | | accessIPv6 | | 2026-03-17 01:26:33.392290 | orchestrator | | addresses | test=192.168.112.102, 192.168.200.148 | 2026-03-17 01:26:33.392294 | orchestrator | | config_drive | | 2026-03-17 01:26:33.392298 | orchestrator | | created | 2026-03-17T01:24:29Z | 2026-03-17 01:26:33.392302 | orchestrator | | description | None | 2026-03-17 01:26:33.392309 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-17 01:26:33.392313 | orchestrator | | hostId | 7adcd503ff888770c59dd96ce22749b718a679671dfb0a4dbd2d8239 | 2026-03-17 01:26:33.392318 | orchestrator | | host_status | None | 2026-03-17 01:26:33.392606 | orchestrator | | id | 2bf25672-ccdb-4162-89bd-0cf4ed0d2434 | 2026-03-17 01:26:33.392621 | orchestrator | | image | N/A (booted from volume) | 2026-03-17 01:26:33.392626 | orchestrator | | key_name | test | 2026-03-17 01:26:33.392630 | orchestrator | | locked | False | 2026-03-17 01:26:33.392634 | orchestrator | | locked_reason | None | 2026-03-17 01:26:33.392638 | orchestrator | | name | test-4 | 2026-03-17 01:26:33.392650 | orchestrator | | pinned_availability_zone | None | 2026-03-17 01:26:33.392654 | orchestrator | | progress | 0 | 2026-03-17 
01:26:33.392658 | orchestrator | | project_id | 8a5a5e0aae4d413a9e547f380ef9b28c | 2026-03-17 01:26:33.392662 | orchestrator | | properties | hostname='test-4' | 2026-03-17 01:26:33.392672 | orchestrator | | security_groups | name='ssh' | 2026-03-17 01:26:33.392677 | orchestrator | | | name='icmp' | 2026-03-17 01:26:33.392681 | orchestrator | | server_groups | None | 2026-03-17 01:26:33.392685 | orchestrator | | status | ACTIVE | 2026-03-17 01:26:33.392689 | orchestrator | | tags | test | 2026-03-17 01:26:33.392693 | orchestrator | | trusted_image_certificates | None | 2026-03-17 01:26:33.392704 | orchestrator | | updated | 2026-03-17T01:25:28Z | 2026-03-17 01:26:33.392708 | orchestrator | | user_id | 0e5d6ebf68f14e21a40c6e3866cc8726 | 2026-03-17 01:26:33.392712 | orchestrator | | volumes_attached | delete_on_termination='True', id='9fe2260e-a721-46e0-9d09-f0c436336c2b' | 2026-03-17 01:26:33.396523 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-17 01:26:33.676912 | orchestrator | + server_ping 2026-03-17 01:26:33.678456 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-17 01:26:33.678828 | orchestrator | ++ tr -d '\r' 2026-03-17 01:26:36.535505 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-17 01:26:36.535595 | orchestrator | + ping -c3 192.168.112.108 2026-03-17 01:26:36.548915 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 
2026-03-17 01:26:36.548985 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=7.13 ms 2026-03-17 01:26:37.545977 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.76 ms 2026-03-17 01:26:38.548133 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.57 ms 2026-03-17 01:26:38.548196 | orchestrator | 2026-03-17 01:26:38.548203 | orchestrator | --- 192.168.112.108 ping statistics --- 2026-03-17 01:26:38.548209 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-17 01:26:38.548214 | orchestrator | rtt min/avg/max/mdev = 2.572/4.152/7.126/2.104 ms 2026-03-17 01:26:38.548220 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-17 01:26:38.548225 | orchestrator | + ping -c3 192.168.112.171 2026-03-17 01:26:38.563197 | orchestrator | PING 192.168.112.171 (192.168.112.171) 56(84) bytes of data. 2026-03-17 01:26:38.563278 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=1 ttl=63 time=10.4 ms 2026-03-17 01:26:39.557928 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=2 ttl=63 time=3.38 ms 2026-03-17 01:26:40.558468 | orchestrator | 64 bytes from 192.168.112.171: icmp_seq=3 ttl=63 time=2.03 ms 2026-03-17 01:26:40.558547 | orchestrator | 2026-03-17 01:26:40.558569 | orchestrator | --- 192.168.112.171 ping statistics --- 2026-03-17 01:26:40.558589 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-17 01:26:40.558609 | orchestrator | rtt min/avg/max/mdev = 2.029/5.254/10.351/3.645 ms 2026-03-17 01:26:40.559325 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-17 01:26:40.559381 | orchestrator | + ping -c3 192.168.112.162 2026-03-17 01:26:40.572788 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data. 
2026-03-17 01:26:40.572836 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=8.89 ms 2026-03-17 01:26:41.567195 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=1.94 ms 2026-03-17 01:26:42.568174 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.82 ms 2026-03-17 01:26:42.568216 | orchestrator | 2026-03-17 01:26:42.568221 | orchestrator | --- 192.168.112.162 ping statistics --- 2026-03-17 01:26:42.568225 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-17 01:26:42.568229 | orchestrator | rtt min/avg/max/mdev = 1.817/4.214/8.888/3.304 ms 2026-03-17 01:26:42.570232 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-17 01:26:42.570261 | orchestrator | + ping -c3 192.168.112.102 2026-03-17 01:26:42.582947 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data. 2026-03-17 01:26:42.582994 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=7.37 ms 2026-03-17 01:26:43.579875 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=2.49 ms 2026-03-17 01:26:44.580973 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=1.61 ms 2026-03-17 01:26:44.581036 | orchestrator | 2026-03-17 01:26:44.581048 | orchestrator | --- 192.168.112.102 ping statistics --- 2026-03-17 01:26:44.581057 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-17 01:26:44.581085 | orchestrator | rtt min/avg/max/mdev = 1.611/3.823/7.369/2.532 ms 2026-03-17 01:26:44.581094 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-17 01:26:44.581103 | orchestrator | + ping -c3 192.168.112.104 2026-03-17 01:26:44.593127 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data. 
2026-03-17 01:26:44.593205 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=7.87 ms
2026-03-17 01:26:45.589853 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=2.18 ms
2026-03-17 01:26:46.592213 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=2.10 ms
2026-03-17 01:26:46.592297 | orchestrator | 
2026-03-17 01:26:46.592310 | orchestrator | --- 192.168.112.104 ping statistics ---
2026-03-17 01:26:46.592319 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-17 01:26:46.592337 | orchestrator | rtt min/avg/max/mdev = 2.096/4.048/7.870/2.702 ms
2026-03-17 01:26:46.593111 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-17 01:26:46.944844 | orchestrator | ok: Runtime: 0:07:56.153847
2026-03-17 01:26:46.995905 | 
2026-03-17 01:26:46.996031 | TASK [Run tempest]
2026-03-17 01:26:47.765906 | orchestrator | 
2026-03-17 01:26:47.766328 | orchestrator | # Tempest
2026-03-17 01:26:47.766366 | orchestrator | 
2026-03-17 01:26:47.766383 | orchestrator | + set -e
2026-03-17 01:26:47.766405 | orchestrator | + source /opt/manager-vars.sh
2026-03-17 01:26:47.766426 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-17 01:26:47.766449 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-17 01:26:47.766493 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-17 01:26:47.766516 | orchestrator | ++ CEPH_VERSION=reef
2026-03-17 01:26:47.766534 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-17 01:26:47.766551 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-17 01:26:47.766576 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-17 01:26:47.766595 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-17 01:26:47.766609 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-17 01:26:47.766629 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-17 01:26:47.766643 | orchestrator | ++ export ARA=false
2026-03-17 01:26:47.766657 | orchestrator | ++ ARA=false
2026-03-17 01:26:47.766682 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-17 01:26:47.766696 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-17 01:26:47.766709 | orchestrator | ++ export TEMPEST=true
2026-03-17 01:26:47.766726 | orchestrator | ++ TEMPEST=true
2026-03-17 01:26:47.766739 | orchestrator | ++ export IS_ZUUL=true
2026-03-17 01:26:47.766752 | orchestrator | ++ IS_ZUUL=true
2026-03-17 01:26:47.766767 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64
2026-03-17 01:26:47.766782 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.64
2026-03-17 01:26:47.766795 | orchestrator | ++ export EXTERNAL_API=false
2026-03-17 01:26:47.766808 | orchestrator | ++ EXTERNAL_API=false
2026-03-17 01:26:47.766822 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-17 01:26:47.766835 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-17 01:26:47.766848 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-17 01:26:47.766862 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-17 01:26:47.766875 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-17 01:26:47.766888 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-17 01:26:47.766902 | orchestrator | + echo
2026-03-17 01:26:47.766915 | orchestrator | + echo '# Tempest'
2026-03-17 01:26:47.766928 | orchestrator | + echo
2026-03-17 01:26:47.766941 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-03-17 01:26:47.766956 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-03-17 01:26:59.838745 | orchestrator | 2026-03-17 01:26:59 | INFO  | Task 2616bd06-1e41-44cf-aef1-93b02b669e07 (tempest) was prepared for execution.
2026-03-17 01:26:59.838827 | orchestrator | 2026-03-17 01:26:59 | INFO  | It takes a moment until task 2616bd06-1e41-44cf-aef1-93b02b669e07 (tempest) has been started and output is visible here.
2026-03-17 01:28:13.633041 | orchestrator | 
2026-03-17 01:28:13.633220 | orchestrator | PLAY [Run tempest] *************************************************************
2026-03-17 01:28:13.633255 | orchestrator | 
2026-03-17 01:28:13.633279 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-03-17 01:28:13.633310 | orchestrator | Tuesday 17 March 2026 01:27:04 +0000 (0:00:00.297) 0:00:00.297 *********
2026-03-17 01:28:13.633331 | orchestrator | changed: [testbed-manager]
2026-03-17 01:28:13.633351 | orchestrator | 
2026-03-17 01:28:13.633371 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-03-17 01:28:13.633391 | orchestrator | Tuesday 17 March 2026 01:27:05 +0000 (0:00:00.733) 0:00:01.031 *********
2026-03-17 01:28:13.633409 | orchestrator | changed: [testbed-manager]
2026-03-17 01:28:13.633428 | orchestrator | 
2026-03-17 01:28:13.633448 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-03-17 01:28:13.633467 | orchestrator | Tuesday 17 March 2026 01:27:06 +0000 (0:00:00.408) 0:00:02.242 *********
2026-03-17 01:28:13.633484 | orchestrator | ok: [testbed-manager]
2026-03-17 01:28:13.633504 | orchestrator | 
2026-03-17 01:28:13.633523 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-03-17 01:28:13.633542 | orchestrator | Tuesday 17 March 2026 01:27:06 +0000 (0:00:00.408) 0:00:02.651 *********
2026-03-17 01:28:13.633560 | orchestrator | changed: [testbed-manager]
2026-03-17 01:28:13.633580 | orchestrator | 
2026-03-17 01:28:13.633599 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-03-17 01:28:13.633618 | orchestrator | Tuesday 17 March 2026 01:27:26 +0000 (0:00:19.778) 0:00:22.429 *********
2026-03-17 01:28:13.633638 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-03-17 01:28:13.633690 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-03-17 01:28:13.633711 | orchestrator | 
2026-03-17 01:28:13.633735 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-03-17 01:28:13.633756 | orchestrator | Tuesday 17 March 2026 01:27:33 +0000 (0:00:07.484) 0:00:29.913 *********
2026-03-17 01:28:13.633773 | orchestrator | ok: [testbed-manager] => {
2026-03-17 01:28:13.633792 | orchestrator |  "changed": false,
2026-03-17 01:28:13.633810 | orchestrator |  "msg": "All assertions passed"
2026-03-17 01:28:13.633830 | orchestrator | }
2026-03-17 01:28:13.633847 | orchestrator | 
2026-03-17 01:28:13.633867 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-03-17 01:28:13.633887 | orchestrator | Tuesday 17 March 2026 01:27:34 +0000 (0:00:00.168) 0:00:30.082 *********
2026-03-17 01:28:13.633906 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:28:13.633923 | orchestrator | 
2026-03-17 01:28:13.633941 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-03-17 01:28:13.633960 | orchestrator | Tuesday 17 March 2026 01:27:37 +0000 (0:00:03.398) 0:00:33.481 *********
2026-03-17 01:28:13.633977 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:28:13.633996 | orchestrator | 
2026-03-17 01:28:13.634081 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-03-17 01:28:13.634099 | orchestrator | Tuesday 17 March 2026 01:27:39 +0000 (0:00:01.732) 0:00:35.213 *********
2026-03-17 01:28:13.634109 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:28:13.634120 | orchestrator | 
2026-03-17 01:28:13.634131 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-03-17 01:28:13.634165 | orchestrator | Tuesday 17 March 2026 01:27:42 +0000 (0:00:03.455) 0:00:38.668 *********
2026-03-17 01:28:13.634184 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:28:13.634195 | orchestrator | 
2026-03-17 01:28:13.634206 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-03-17 01:28:13.634217 | orchestrator | Tuesday 17 March 2026 01:27:42 +0000 (0:00:00.188) 0:00:38.857 *********
2026-03-17 01:28:13.634227 | orchestrator | changed: [testbed-manager]
2026-03-17 01:28:13.634238 | orchestrator | 
2026-03-17 01:28:13.634250 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-03-17 01:28:13.634261 | orchestrator | Tuesday 17 March 2026 01:27:45 +0000 (0:00:02.443) 0:00:41.301 *********
2026-03-17 01:28:13.634271 | orchestrator | changed: [testbed-manager]
2026-03-17 01:28:13.634282 | orchestrator | 
2026-03-17 01:28:13.634293 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-03-17 01:28:13.634304 | orchestrator | Tuesday 17 March 2026 01:27:55 +0000 (0:00:09.644) 0:00:50.945 *********
2026-03-17 01:28:13.634314 | orchestrator | changed: [testbed-manager]
2026-03-17 01:28:13.634325 | orchestrator | 
2026-03-17 01:28:13.634336 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-03-17 01:28:13.634347 | orchestrator | Tuesday 17 March 2026 01:27:55 +0000 (0:00:00.769) 0:00:51.714 *********
2026-03-17 01:28:13.634358 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:28:13.634368 | orchestrator | 
2026-03-17 01:28:13.634379 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-03-17 01:28:13.634390 | orchestrator | Tuesday 17 March 2026 01:27:57 +0000 (0:00:01.476) 0:00:53.191 *********
2026-03-17 01:28:13.634400 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:28:13.634411 | orchestrator | 
2026-03-17 01:28:13.634422 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-03-17 01:28:13.634432 | orchestrator | Tuesday 17 March 2026 01:27:58 +0000 (0:00:01.470) 0:00:54.661 *********
2026-03-17 01:28:13.634454 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:28:13.634465 | orchestrator | 
2026-03-17 01:28:13.634475 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-03-17 01:28:13.634486 | orchestrator | Tuesday 17 March 2026 01:27:58 +0000 (0:00:00.189) 0:00:54.850 *********
2026-03-17 01:28:13.634509 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:28:13.634520 | orchestrator | 
2026-03-17 01:28:13.634531 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-03-17 01:28:13.634542 | orchestrator | Tuesday 17 March 2026 01:27:59 +0000 (0:00:00.185) 0:00:55.036 *********
2026-03-17 01:28:13.634552 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-17 01:28:13.634563 | orchestrator | 
2026-03-17 01:28:13.634574 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-03-17 01:28:13.634609 | orchestrator | Tuesday 17 March 2026 01:28:02 +0000 (0:00:03.709) 0:00:58.746 *********
2026-03-17 01:28:13.634621 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-03-17 01:28:13.634633 | orchestrator |  "changed": false,
2026-03-17 01:28:13.634644 | orchestrator |  "msg": "All assertions passed"
2026-03-17 01:28:13.634654 | orchestrator | }
2026-03-17 01:28:13.634665 | orchestrator | 
2026-03-17 01:28:13.634676 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-03-17 01:28:13.634688 | orchestrator | Tuesday 17 March 2026 01:28:03 +0000 (0:00:00.175) 0:00:58.922 *********
2026-03-17 01:28:13.634699 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1}) 
2026-03-17 01:28:13.634711 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2}) 
2026-03-17 01:28:13.634722 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:28:13.634732 | orchestrator | 
2026-03-17 01:28:13.634743 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-03-17 01:28:13.634754 | orchestrator | Tuesday 17 March 2026 01:28:03 +0000 (0:00:00.359) 0:00:59.282 *********
2026-03-17 01:28:13.634765 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:28:13.634775 | orchestrator | 
2026-03-17 01:28:13.634786 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-03-17 01:28:13.634797 | orchestrator | Tuesday 17 March 2026 01:28:03 +0000 (0:00:00.173) 0:00:59.455 *********
2026-03-17 01:28:13.634807 | orchestrator | ok: [testbed-manager]
2026-03-17 01:28:13.634818 | orchestrator | 
2026-03-17 01:28:13.634829 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-03-17 01:28:13.634840 | orchestrator | Tuesday 17 March 2026 01:28:04 +0000 (0:00:00.462) 0:00:59.918 *********
2026-03-17 01:28:13.634850 | orchestrator | changed: [testbed-manager]
2026-03-17 01:28:13.634861 | orchestrator | 
2026-03-17 01:28:13.634872 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-03-17 01:28:13.634883 | orchestrator | Tuesday 17 March 2026 01:28:04 +0000 (0:00:00.832) 0:01:00.751 *********
2026-03-17 01:28:13.634893 | orchestrator | ok: [testbed-manager]
2026-03-17 01:28:13.634904 | orchestrator | 
2026-03-17 01:28:13.634915 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-03-17 01:28:13.634925 | orchestrator | Tuesday 17 March 2026 01:28:05 +0000 (0:00:00.440) 0:01:01.192 *********
2026-03-17 01:28:13.634936 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:28:13.634947 | orchestrator | 
2026-03-17 01:28:13.634957 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-03-17 01:28:13.634968 | orchestrator | Tuesday 17 March 2026 01:28:05 +0000 (0:00:00.126) 0:01:01.318 *********
2026-03-17 01:28:13.634979 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-17 01:28:13.634990 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-17 01:28:13.635000 | orchestrator | 
2026-03-17 01:28:13.635011 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-03-17 01:28:13.635021 | orchestrator | Tuesday 17 March 2026 01:28:12 +0000 (0:00:07.222) 0:01:08.541 *********
2026-03-17 01:28:13.635032 | orchestrator | changed: [testbed-manager]
2026-03-17 01:28:13.635043 | orchestrator | 
2026-03-17 01:28:13.635053 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:28:13.635072 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-17 01:28:13.635083 | orchestrator | 
2026-03-17 01:28:13.635094 | orchestrator | 
2026-03-17 01:28:13.635105 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:28:13.635116 | orchestrator | Tuesday 17 March 2026 01:28:13 +0000 (0:00:00.985) 0:01:09.527 *********
2026-03-17 01:28:13.635126 | orchestrator | ===============================================================================
2026-03-17 01:28:13.635137 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 19.78s
2026-03-17 01:28:13.635177 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 9.64s
2026-03-17 01:28:13.635196 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 7.48s
2026-03-17 01:28:13.635213 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.22s
2026-03-17 01:28:13.635231 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.71s
2026-03-17 01:28:13.635249 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.46s
2026-03-17 01:28:13.635267 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.40s
2026-03-17 01:28:13.635286 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.44s
2026-03-17 01:28:13.635306 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.73s
2026-03-17 01:28:13.635325 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.48s
2026-03-17 01:28:13.635353 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.47s
2026-03-17 01:28:13.635373 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.21s
2026-03-17 01:28:13.635391 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 0.99s
2026-03-17 01:28:13.635410 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.83s
2026-03-17 01:28:13.635428 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.77s
2026-03-17 01:28:13.635446 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 0.73s
2026-03-17 01:28:13.635466 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.46s
2026-03-17 01:28:13.635498 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.44s
2026-03-17 01:28:13.982610 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.41s
2026-03-17 01:28:13.982697 | orchestrator | osism.validations.tempest : Resolve flavor IDs -------------------------- 0.36s
2026-03-17 01:28:14.369255 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-03-17 01:28:14.374133 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-03-17 01:28:14.432017 | orchestrator | 
2026-03-17 01:28:14.432074 | orchestrator | ## IDENTITY (API)
2026-03-17 01:28:14.432083 | orchestrator | 
2026-03-17 01:28:14.432090 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-17 01:28:14.432098 | orchestrator | + echo
2026-03-17 01:28:14.432104 | orchestrator | + echo '## IDENTITY (API)'
2026-03-17 01:28:14.432111 | orchestrator | + echo
2026-03-17 01:28:14.432118 | orchestrator | + _tempest tempest.api.identity.v3
2026-03-17 01:28:14.432125 | orchestrator | + local regex=tempest.api.identity.v3
2026-03-17 01:28:14.432154 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-03-17 01:28:14.433126 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-17 01:28:14.441411 | orchestrator | + tee -a /opt/tempest/20260317-0128.log
2026-03-17 01:28:18.147226 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-17 01:28:18.147344 | orchestrator | Did you mean one of these?
2026-03-17 01:28:18.147412 | orchestrator | help
2026-03-17 01:28:18.147435 | orchestrator | init
2026-03-17 01:28:18.517463 | orchestrator | 
2026-03-17 01:28:18.517544 | orchestrator | ## IMAGE (API)
2026-03-17 01:28:18.517553 | orchestrator | 
2026-03-17 01:28:18.517560 | orchestrator | + echo
2026-03-17 01:28:18.517567 | orchestrator | + echo '## IMAGE (API)'
2026-03-17 01:28:18.517575 | orchestrator | + echo
2026-03-17 01:28:18.517582 | orchestrator | + _tempest tempest.api.image.v2
2026-03-17 01:28:18.517589 | orchestrator | + local regex=tempest.api.image.v2
2026-03-17 01:28:18.517598 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-03-17 01:28:18.517626 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-17 01:28:18.519536 | orchestrator | + tee -a /opt/tempest/20260317-0128.log
2026-03-17 01:28:22.108064 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-17 01:28:22.108175 | orchestrator | Did you mean one of these?
2026-03-17 01:28:22.108191 | orchestrator | help
2026-03-17 01:28:22.108200 | orchestrator | init
2026-03-17 01:28:22.501760 | orchestrator | 
2026-03-17 01:28:22.501842 | orchestrator | ## NETWORK (API)
2026-03-17 01:28:22.501854 | orchestrator | 
2026-03-17 01:28:22.501861 | orchestrator | + echo
2026-03-17 01:28:22.501868 | orchestrator | + echo '## NETWORK (API)'
2026-03-17 01:28:22.501877 | orchestrator | + echo
2026-03-17 01:28:22.501884 | orchestrator | + _tempest tempest.api.network
2026-03-17 01:28:22.501892 | orchestrator | + local regex=tempest.api.network
2026-03-17 01:28:22.501932 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-03-17 01:28:22.504186 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-17 01:28:22.505960 | orchestrator | + tee -a /opt/tempest/20260317-0128.log
2026-03-17 01:28:25.957678 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-17 01:28:25.957786 | orchestrator | Did you mean one of these?
2026-03-17 01:28:25.957804 | orchestrator | help
2026-03-17 01:28:25.957818 | orchestrator | init
2026-03-17 01:28:26.291816 | orchestrator | 
2026-03-17 01:28:26.291896 | orchestrator | ## VOLUME (API)
2026-03-17 01:28:26.291908 | orchestrator | 
2026-03-17 01:28:26.291915 | orchestrator | + echo
2026-03-17 01:28:26.291922 | orchestrator | + echo '## VOLUME (API)'
2026-03-17 01:28:26.291931 | orchestrator | + echo
2026-03-17 01:28:26.291937 | orchestrator | + _tempest tempest.api.volume
2026-03-17 01:28:26.291945 | orchestrator | + local regex=tempest.api.volume
2026-03-17 01:28:26.291965 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-03-17 01:28:26.291986 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-17 01:28:26.293229 | orchestrator | + tee -a /opt/tempest/20260317-0128.log
2026-03-17 01:28:29.564917 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-17 01:28:29.565986 | orchestrator | Did you mean one of these?
2026-03-17 01:28:29.566066 | orchestrator | help
2026-03-17 01:28:29.566073 | orchestrator | init
2026-03-17 01:28:29.825490 | orchestrator | 
2026-03-17 01:28:29.825573 | orchestrator | ## COMPUTE (API)
2026-03-17 01:28:29.825583 | orchestrator | 
2026-03-17 01:28:29.825593 | orchestrator | + echo
2026-03-17 01:28:29.825601 | orchestrator | + echo '## COMPUTE (API)'
2026-03-17 01:28:29.825608 | orchestrator | + echo
2026-03-17 01:28:29.825615 | orchestrator | + _tempest tempest.api.compute
2026-03-17 01:28:29.825621 | orchestrator | + local regex=tempest.api.compute
2026-03-17 01:28:29.825648 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-03-17 01:28:29.825915 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-17 01:28:29.828205 | orchestrator | + tee -a /opt/tempest/20260317-0128.log
2026-03-17 01:28:33.320919 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-17 01:28:33.321045 | orchestrator | Did you mean one of these?
2026-03-17 01:28:33.321072 | orchestrator | help
2026-03-17 01:28:33.321091 | orchestrator | init
2026-03-17 01:28:33.664356 | orchestrator | 
2026-03-17 01:28:33.664431 | orchestrator | ## DNS (API)
2026-03-17 01:28:33.664440 | orchestrator | 
2026-03-17 01:28:33.664446 | orchestrator | + echo
2026-03-17 01:28:33.664452 | orchestrator | + echo '## DNS (API)'
2026-03-17 01:28:33.664459 | orchestrator | + echo
2026-03-17 01:28:33.664466 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-03-17 01:28:33.664473 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-03-17 01:28:33.664481 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-03-17 01:28:33.666562 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-17 01:28:33.670280 | orchestrator | + tee -a /opt/tempest/20260317-0128.log
2026-03-17 01:28:37.192082 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-17 01:28:37.192205 | orchestrator | Did you mean one of these?
2026-03-17 01:28:37.192218 | orchestrator | help
2026-03-17 01:28:37.192225 | orchestrator | init
2026-03-17 01:28:37.552673 | orchestrator | 
2026-03-17 01:28:37.552749 | orchestrator | ## OBJECT-STORE (API)
2026-03-17 01:28:37.552776 | orchestrator | 
2026-03-17 01:28:37.552796 | orchestrator | + echo
2026-03-17 01:28:37.552807 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-03-17 01:28:37.552817 | orchestrator | + echo
2026-03-17 01:28:37.552828 | orchestrator | + _tempest tempest.api.object_storage
2026-03-17 01:28:37.552839 | orchestrator | + local regex=tempest.api.object_storage
2026-03-17 01:28:37.553723 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-03-17 01:28:37.554341 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-17 01:28:37.555935 | orchestrator | + tee -a /opt/tempest/20260317-0128.log
2026-03-17 01:28:41.056811 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-17 01:28:41.056903 | orchestrator | Did you mean one of these?
2026-03-17 01:28:41.056919 | orchestrator | help
2026-03-17 01:28:41.056932 | orchestrator | init
2026-03-17 01:28:41.604665 | orchestrator | ok: Runtime: 0:01:54.090700
2026-03-17 01:28:41.629720 | 
2026-03-17 01:28:41.629888 | TASK [Check prometheus alert status]
2026-03-17 01:28:42.165103 | orchestrator | skipping: Conditional result was False
2026-03-17 01:28:42.168805 | 
2026-03-17 01:28:42.168983 | PLAY RECAP
2026-03-17 01:28:42.169126 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-03-17 01:28:42.169195 | 
2026-03-17 01:28:42.405504 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-03-17 01:28:42.406658 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-17 01:28:43.205874 | 
2026-03-17 01:28:43.206045 | PLAY [Post output play]
2026-03-17 01:28:43.222973 | 
2026-03-17 01:28:43.223122 | LOOP [stage-output : Register sources]
2026-03-17 01:28:43.294468 | 
2026-03-17 01:28:43.294769 | TASK [stage-output : Check sudo]
2026-03-17 01:28:44.208791 | orchestrator | sudo: a password is required
2026-03-17 01:28:44.334569 | orchestrator | ok: Runtime: 0:00:00.009723
2026-03-17 01:28:44.350687 | 
2026-03-17 01:28:44.350902 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-17 01:28:44.390270 | 
2026-03-17 01:28:44.390579 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-17 01:28:44.459273 | orchestrator | ok
2026-03-17 01:28:44.468267 | 
2026-03-17 01:28:44.468442 | LOOP [stage-output : Ensure target folders exist]
2026-03-17 01:28:44.938257 | orchestrator | ok: "docs"
2026-03-17 01:28:44.938618 | 
2026-03-17 01:28:45.228434 | orchestrator | ok: "artifacts"
2026-03-17 01:28:45.501450 | orchestrator | ok: "logs"
2026-03-17 01:28:45.516516 | 
2026-03-17 01:28:45.516681 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-17 01:28:45.553069 | 
2026-03-17 01:28:45.553342 | TASK [stage-output : Make all log files readable]
2026-03-17 01:28:45.866095 | orchestrator | ok
2026-03-17 01:28:45.875553 | 
2026-03-17 01:28:45.875710 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-17 01:28:45.910592 | orchestrator | skipping: Conditional result was False
2026-03-17 01:28:45.927321 | 
2026-03-17 01:28:45.927509 | TASK [stage-output : Discover log files for compression]
2026-03-17 01:28:45.962423 | orchestrator | skipping: Conditional result was False
2026-03-17 01:28:45.978100 | 
2026-03-17 01:28:45.978284 | LOOP [stage-output : Archive everything from logs]
2026-03-17 01:28:46.019067 | 
2026-03-17 01:28:46.019235 | PLAY [Post cleanup play]
2026-03-17 01:28:46.027321 | 
2026-03-17 01:28:46.027451 | TASK [Set cloud fact (Zuul deployment)]
2026-03-17 01:28:46.081345 | orchestrator | ok
2026-03-17 01:28:46.093101 | 
2026-03-17 01:28:46.093259 | TASK [Set cloud fact (local deployment)]
2026-03-17 01:28:46.119830 | orchestrator | skipping: Conditional result was False
2026-03-17 01:28:46.136220 | 
2026-03-17 01:28:46.136375 | TASK [Clean the cloud environment]
2026-03-17 01:28:49.583806 | orchestrator | 2026-03-17 01:28:49 - clean up servers
2026-03-17 01:28:50.329504 | orchestrator | 2026-03-17 01:28:50 - testbed-manager
2026-03-17 01:28:50.419576 | orchestrator | 2026-03-17 01:28:50 - testbed-node-0
2026-03-17 01:28:50.515616 | orchestrator | 2026-03-17 01:28:50 - testbed-node-1
2026-03-17 01:28:50.596957 | orchestrator | 2026-03-17 01:28:50 - testbed-node-2
2026-03-17 01:28:50.679431 | orchestrator | 2026-03-17 01:28:50 - testbed-node-5
2026-03-17 01:28:50.784699 | orchestrator | 2026-03-17 01:28:50 - testbed-node-3
2026-03-17 01:28:50.877723 | orchestrator | 2026-03-17 01:28:50 - testbed-node-4
2026-03-17 01:28:50.969462 | orchestrator | 2026-03-17 01:28:50 - clean up keypairs
2026-03-17 01:28:50.985646 | orchestrator | 2026-03-17 01:28:50 - testbed
2026-03-17 01:28:51.008840 | orchestrator | 2026-03-17 01:28:51 - wait for servers to be gone
2026-03-17 01:29:01.866868 | orchestrator | 2026-03-17 01:29:01 - clean up ports
2026-03-17 01:29:02.072590 | orchestrator | 2026-03-17 01:29:02 - 2625b5e3-ffd1-484b-8d4f-a5f8bc1dede9
2026-03-17 01:29:02.349373 | orchestrator | 2026-03-17 01:29:02 - 475bb383-76e6-4125-9220-192d6c383195
2026-03-17 01:29:02.683517 | orchestrator | 2026-03-17 01:29:02 - 4d10fad2-8948-4e54-bbc0-aa471bd27155
2026-03-17 01:29:03.076617 | orchestrator | 2026-03-17 01:29:03 - 7fddc1df-c69a-454f-8291-5c2bbccd29e6
2026-03-17 01:29:03.301908 | orchestrator | 2026-03-17 01:29:03 - b27de01c-fe8d-4672-9b2c-52429c1c2be6
2026-03-17 01:29:03.522802 | orchestrator | 2026-03-17 01:29:03 - b66ec9d8-576c-48cf-a1ab-ccd4c9f9b4f1
2026-03-17 01:29:03.719572 | orchestrator | 2026-03-17 01:29:03 - d51a6e7f-1f1e-4372-9df7-6b7700f5db01
2026-03-17 01:29:03.920410 | orchestrator | 2026-03-17 01:29:03 - clean up volumes
2026-03-17 01:29:04.037256 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-1-node-base
2026-03-17 01:29:04.078923 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-manager-base
2026-03-17 01:29:04.123371 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-2-node-base
2026-03-17 01:29:04.165056 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-3-node-base
2026-03-17 01:29:04.207438 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-4-node-base
2026-03-17 01:29:04.250288 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-0-node-base
2026-03-17 01:29:04.299653 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-5-node-base
2026-03-17 01:29:04.347287 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-6-node-3
2026-03-17 01:29:04.391770 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-7-node-4
2026-03-17 01:29:04.437812 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-3-node-3
2026-03-17 01:29:04.483422 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-8-node-5
2026-03-17 01:29:04.526534 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-1-node-4
2026-03-17 01:29:04.568864 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-5-node-5
2026-03-17 01:29:04.619741 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-4-node-4
2026-03-17 01:29:04.662853 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-0-node-3
2026-03-17 01:29:04.703652 | orchestrator | 2026-03-17 01:29:04 - testbed-volume-2-node-5
2026-03-17 01:29:04.740642 | orchestrator | 2026-03-17 01:29:04 - disconnect routers
2026-03-17 01:29:04.854122 | orchestrator | 2026-03-17 01:29:04 - testbed
2026-03-17 01:29:06.243821 | orchestrator | 2026-03-17 01:29:06 - clean up subnets
2026-03-17 01:29:06.295356 | orchestrator | 2026-03-17 01:29:06 - subnet-testbed-management
2026-03-17 01:29:06.467443 | orchestrator | 2026-03-17 01:29:06 - clean up networks
2026-03-17 01:29:06.604324 | orchestrator | 2026-03-17 01:29:06 - net-testbed-management
2026-03-17 01:29:06.931487 | orchestrator | 2026-03-17 01:29:06 - clean up security groups
2026-03-17 01:29:06.972905 | orchestrator | 2026-03-17 01:29:06 - testbed-management
2026-03-17 01:29:07.093680 | orchestrator | 2026-03-17 01:29:07 - testbed-node
2026-03-17 01:29:07.233841 | orchestrator | 2026-03-17 01:29:07 - clean up floating ips
2026-03-17 01:29:07.270517 | orchestrator | 2026-03-17 01:29:07 - 81.163.193.64
2026-03-17 01:29:07.638592 | orchestrator | 2026-03-17 01:29:07 - clean up routers
2026-03-17 01:29:07.739844 | orchestrator | 2026-03-17 01:29:07 - testbed
2026-03-17 01:29:08.691550 | orchestrator | ok: Runtime: 0:00:22.090432
2026-03-17 01:29:08.696082 | 
2026-03-17 01:29:08.696289 | PLAY RECAP
2026-03-17 01:29:08.696477 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-17 01:29:08.696551 | 
2026-03-17 01:29:08.831407 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-17 01:29:08.834062 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-17 01:29:09.573136 | 2026-03-17 01:29:09.573308 | PLAY [Cleanup play] 2026-03-17 01:29:09.589502 | 2026-03-17 01:29:09.589654 | TASK [Set cloud fact (Zuul deployment)] 2026-03-17 01:29:09.658626 | orchestrator | ok 2026-03-17 01:29:09.668608 | 2026-03-17 01:29:09.668753 | TASK [Set cloud fact (local deployment)] 2026-03-17 01:29:09.703276 | orchestrator | skipping: Conditional result was False 2026-03-17 01:29:09.720855 | 2026-03-17 01:29:09.721011 | TASK [Clean the cloud environment] 2026-03-17 01:29:10.898916 | orchestrator | 2026-03-17 01:29:10 - clean up servers 2026-03-17 01:29:11.386308 | orchestrator | 2026-03-17 01:29:11 - clean up keypairs 2026-03-17 01:29:11.407655 | orchestrator | 2026-03-17 01:29:11 - wait for servers to be gone 2026-03-17 01:29:11.454512 | orchestrator | 2026-03-17 01:29:11 - clean up ports 2026-03-17 01:29:11.542503 | orchestrator | 2026-03-17 01:29:11 - clean up volumes 2026-03-17 01:29:11.616269 | orchestrator | 2026-03-17 01:29:11 - disconnect routers 2026-03-17 01:29:11.647372 | orchestrator | 2026-03-17 01:29:11 - clean up subnets 2026-03-17 01:29:11.673109 | orchestrator | 2026-03-17 01:29:11 - clean up networks 2026-03-17 01:29:12.299362 | orchestrator | 2026-03-17 01:29:12 - clean up security groups 2026-03-17 01:29:12.337487 | orchestrator | 2026-03-17 01:29:12 - clean up floating ips 2026-03-17 01:29:12.368880 | orchestrator | 2026-03-17 01:29:12 - clean up routers 2026-03-17 01:29:12.762154 | orchestrator | ok: Runtime: 0:00:01.872005 2026-03-17 01:29:12.766065 | 2026-03-17 01:29:12.766222 | PLAY RECAP 2026-03-17 01:29:12.766343 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-03-17 01:29:12.766432 | 2026-03-17 01:29:12.894243 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-17 01:29:12.896824 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-17 01:29:13.640791 | 
2026-03-17 01:29:13.640953 | PLAY [Base post-fetch] 2026-03-17 01:29:13.656794 | 2026-03-17 01:29:13.656932 | TASK [fetch-output : Set log path for multiple nodes] 2026-03-17 01:29:13.712770 | orchestrator | skipping: Conditional result was False 2026-03-17 01:29:13.727353 | 2026-03-17 01:29:13.727589 | TASK [fetch-output : Set log path for single node] 2026-03-17 01:29:13.786302 | orchestrator | ok 2026-03-17 01:29:13.796625 | 2026-03-17 01:29:13.796788 | LOOP [fetch-output : Ensure local output dirs] 2026-03-17 01:29:14.306169 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/9d2318408dc845a1bb8697a007f9fb34/work/logs" 2026-03-17 01:29:14.587557 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/9d2318408dc845a1bb8697a007f9fb34/work/artifacts" 2026-03-17 01:29:14.875769 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/9d2318408dc845a1bb8697a007f9fb34/work/docs" 2026-03-17 01:29:14.896013 | 2026-03-17 01:29:14.896229 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-17 01:29:15.876839 | orchestrator | changed: .d..t...... ./ 2026-03-17 01:29:15.877181 | orchestrator | changed: All items complete 2026-03-17 01:29:15.877237 | 2026-03-17 01:29:16.583850 | orchestrator | changed: .d..t...... ./ 2026-03-17 01:29:17.318867 | orchestrator | changed: .d..t...... 
./ 2026-03-17 01:29:17.349650 | 2026-03-17 01:29:17.349789 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-17 01:29:17.389365 | orchestrator | skipping: Conditional result was False 2026-03-17 01:29:17.392329 | orchestrator | skipping: Conditional result was False 2026-03-17 01:29:17.402095 | 2026-03-17 01:29:17.402181 | PLAY RECAP 2026-03-17 01:29:17.402232 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-17 01:29:17.402259 | 2026-03-17 01:29:17.528957 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-17 01:29:17.531506 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-17 01:29:18.261142 | 2026-03-17 01:29:18.261303 | PLAY [Base post] 2026-03-17 01:29:18.275914 | 2026-03-17 01:29:18.276056 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-17 01:29:19.329007 | orchestrator | changed 2026-03-17 01:29:19.337983 | 2026-03-17 01:29:19.338103 | PLAY RECAP 2026-03-17 01:29:19.338184 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-17 01:29:19.338252 | 2026-03-17 01:29:19.459430 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-17 01:29:19.460512 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-17 01:29:20.267114 | 2026-03-17 01:29:20.267362 | PLAY [Base post-logs] 2026-03-17 01:29:20.278827 | 2026-03-17 01:29:20.279035 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-17 01:29:20.754184 | localhost | changed 2026-03-17 01:29:20.772872 | 2026-03-17 01:29:20.773097 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-17 01:29:20.801617 | localhost | ok 2026-03-17 01:29:20.808303 | 2026-03-17 01:29:20.808484 | TASK [Set zuul-log-path fact] 2026-03-17 
01:29:20.827054 | localhost | ok 2026-03-17 01:29:20.841995 | 2026-03-17 01:29:20.842139 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-17 01:29:20.868873 | localhost | ok 2026-03-17 01:29:20.873996 | 2026-03-17 01:29:20.874148 | TASK [upload-logs : Create log directories] 2026-03-17 01:29:21.442654 | localhost | changed 2026-03-17 01:29:21.446334 | 2026-03-17 01:29:21.446475 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-17 01:29:21.979977 | localhost -> localhost | ok: Runtime: 0:00:00.008249 2026-03-17 01:29:21.985902 | 2026-03-17 01:29:21.986065 | TASK [upload-logs : Upload logs to log server] 2026-03-17 01:29:22.565320 | localhost | Output suppressed because no_log was given 2026-03-17 01:29:22.570230 | 2026-03-17 01:29:22.570546 | LOOP [upload-logs : Compress console log and json output] 2026-03-17 01:29:22.628465 | localhost | skipping: Conditional result was False 2026-03-17 01:29:22.634071 | localhost | skipping: Conditional result was False 2026-03-17 01:29:22.646114 | 2026-03-17 01:29:22.646347 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-17 01:29:22.708037 | localhost | skipping: Conditional result was False 2026-03-17 01:29:22.708686 | 2026-03-17 01:29:22.713408 | localhost | skipping: Conditional result was False 2026-03-17 01:29:22.721620 | 2026-03-17 01:29:22.721888 | LOOP [upload-logs : Upload console log and json output]